Remove unused permission definitions from SELinux.
Many of these were only ever used in pre-mainline
versions of SELinux, prior to Linux 2.6.0. Some of them
were used in the legacy network or compat_net=1 checks
that were disabled by default in Linux 2.6.18 and
fully removed in Linux 2.6.30.
The corresponding classmap declarations were removed from the
mainline kernel in:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=42a9699a9fa179c0054ea3cf5ad3cc67104a6162
Permissions never used in mainline Linux:
file swapon
filesystem transition
tcp_socket { connectto newconn acceptfrom }
node enforce_dest
unix_stream_socket { newconn acceptfrom }
Legacy network checks, removed in 2.6.30:
socket { recv_msg send_msg }
node { tcp_recv tcp_send udp_recv udp_send rawip_recv rawip_send dccp_recv dccp_send }
netif { tcp_recv tcp_send udp_recv udp_send rawip_recv rawip_send dccp_recv dccp_send }
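For reference, these permissions were declared in policy/flask/access_vectors
with entries like the following (a sketch of the pre-removal state, not the
literal diff; surrounding permissions abbreviated):

class tcp_socket
inherits socket
{
	connectto	# never used in mainline; removed
	newconn		# never used in mainline; removed
	acceptfrom	# never used in mainline; removed
	node_bind
	name_connect
}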
Signed-off-by: Stephen Smalley <sds@tycho.nsa.gov>
On 01/05/2011 08:48 AM, Christopher J. PeBenito wrote:
> On 12/16/10 12:32, Paul Nuzzi wrote:
>> On 12/15/2010 03:54 PM, Christopher J. PeBenito wrote:
>>> On 12/10/10 18:22, Paul Nuzzi wrote:
>>>> Added labeled IPSec support to hadoop. SELinux will be able to enforce
>>>> which services each daemon is allowed to connect to, and labeled IPSec
>>>> can enforce the range of services each daemon can receive from. This
>>>> enforces the architecture of Hadoop without having to modify any of the
>>>> code, and adds a level of confidentiality, integrity, and authentication
>>>> provided outside the software stack.
>>>
>>> A few things.
>>>
>>> The verb used in Reference Policy interfaces for peer recv is recvfrom
>>> (a holdover from previous labeled networking implementations). So the
>>> interfaces are like hadoop_recvfrom_datanode().
>>
>> Easy change.
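For illustration, such an interface would look roughly like this (the body is
a sketch based on the association permission model, not the actual patch):

interface(`hadoop_recvfrom_datanode',`
	gen_require(`
		type hadoop_datanode_t;
	')

	allow $1 hadoop_datanode_t:association recvfrom;
')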
>>
>>> It seems like setkey should be able to setcontext any type used on ipsec
>>> associations. I think the best thing would be to add additional support
>>> to either the ipsec or corenetwork modules (I haven't decided which one
>>> yet) for associations. So, say we have an interface called
>>> ipsec_spd_type() which adds the parameter type to the attribute
>>> ipsec_spd_types. Then we can have an allow setkey_t
>>> ipsec_spd_types:association setkey; rule and we don't have to update it
>>> every time more labeled networking policy is added.
>>
>> That seems a lot less clunky than updating setkey every time we add a new association.
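A minimal sketch of the proposed scheme, using the names suggested above (the
placement, ipsec or corenetwork, is still undecided at this point in the
thread):

# in the module providing the attribute:
attribute ipsec_spd_types;

interface(`ipsec_spd_type',`
	gen_require(`
		attribute ipsec_spd_types;
	')

	typeattribute $1 ipsec_spd_types;
')

# in the setkey policy, written once:
allow setkey_t ipsec_spd_types:association setkey;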
>>
>>> This is definitely wrong since it's not a file:
>>> +files_type(hadoop_lan_t)
>>
>> Let me know how you would like to handle associations and I could update the
>> patch.
>
> Let's go with putting the associations in corenetwork.
>
>> Will the files_type error be cleared up when we re-engineer this?
>
> I'm not sure what you mean. The incorrect rule was added in your patch.
>
Adds labeled IPSec policy to hadoop to control the remote processes that are allowed to connect to the cloud's services.
Signed-off-by: Paul Nuzzi <pjnuzzi@tycho.ncsc.mil>
On 12/15/2010 03:17 PM, Christopher J. PeBenito wrote:
> On 12/13/10 10:39, Paul Nuzzi wrote:
>> On 12/11/2010 04:01 AM, Dominick Grift wrote:
>>> On 12/11/2010 12:22 AM, Paul Nuzzi wrote:
>>>
>>> Does hadoop depend on kerberos? If no then kerberos_use should probably
>>> be optional.
>>
>> The new version of hadoop added Kerberos for authentication.
>
> So, to be explicit, it's an unconditional requirement?
Yes. I think all future versions of hadoop will be kerberos enabled.
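So the call stays unconditional in the hadoop policy, along these lines (a
sketch):

# unconditional, since hadoop requires Kerberos:
kerberos_use(hadoop_t)

# rather than the optional form suggested earlier:
# optional_policy(`
#	kerberos_use(hadoop_t)
# ')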
> It seems like there should be a hadoop_home_t that is
> userdom_user_home_content()
Updated.
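Presumably along these lines (a sketch):

type hadoop_home_t;
userdom_user_home_content(hadoop_home_t)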
Signed-off-by: Paul Nuzzi <pjnuzzi@tycho.ncsc.mil>
* a pass cleaning up the style
* adjusted some regular expressions in the file contexts: .* is the same as (.*)? since * means zero or more matches (see the file context example below)
* renamed a few interfaces
* two rules that I dropped, as they require further explanation:
> +files_read_all_files(hadoop_t)
A very big privilege.
and
> +fs_associate(hadoop_tasktracker_t)
This is a domain, so the only files with this type should be the /proc/pid ones, which don't require associate permissions.
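To illustrate the regular expression point, a hypothetical file context pair
(the path and type are invented for the example):

# before: the trailing ? is redundant
/usr/lib/hadoop(.*)?	gen_context(system_u:object_r:hadoop_lib_t,s0)
# after: equivalent, since * already matches zero characters
/usr/lib/hadoop.*	gen_context(system_u:object_r:hadoop_lib_t,s0)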
On 10/04/2010 02:18 PM, Christopher J. PeBenito wrote:
> On 10/04/10 13:15, Paul Nuzzi wrote:
>> On 10/01/2010 01:56 PM, Christopher J. PeBenito wrote:
>>> On 10/01/10 11:17, Paul Nuzzi wrote:
>>>> On 10/01/2010 08:02 AM, Dominick Grift wrote:
>>>>> On Thu, Sep 30, 2010 at 03:39:40PM -0400, Paul Nuzzi wrote:
>>>>>> I updated the patch based on recommendations from the mailing list.
>>>>>> All of hadoop's services are included in one module instead of
>>>>>> individual ones. Unconfined and sysadm roles are given access to
>>>>>> hadoop and zookeeper client domain transitions. The services are started
>>>>>> using run_init. Let me know what you think.
>>>>>
>>>>> Why do some hadoop domains need to manage generic tmp?
>>>>>
>>>>> files_manage_generic_tmp_dirs(zookeeper_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_$1_initrc_t)
>>>>> files_manage_generic_tmp_files(hadoop_$1_initrc_t)
>>>>> files_manage_generic_tmp_files(hadoop_$1_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_$1_t)
>>>>
>>>> This has to be done for Java JMX to work. All of the files are written to
>>>> /tmp/hsperfdata_(hadoop/zookeeper). The hsperfdata_* directory is labeled
>>>> tmp_t, while the files for each service are labeled hadoop_*_tmp_t. The
>>>> first service to start would end up owning the directory if it were not
>>>> labeled tmp_t.
>>>
>>> The hsperfdata dir in /tmp is certainly the bane of policy writers. Based
>>> on a quick look through the policy, it looks like the only dir they create
>>> in /tmp is this hsperfdata dir. I suggest you do something like
>>>
>>> files_tmp_filetrans(hadoop_t, hadoop_hsperfdata_t, dir)
>>> files_tmp_filetrans(zookeeper_t, hadoop_hsperfdata_t, dir)
>>>
>>> filetrans_pattern(hadoop_t, hadoop_hsperfdata_t, hadoop_tmp_t, file)
>>> filetrans_pattern(zookeeper_t, hadoop_hsperfdata_t, zookeeper_tmp_t, file)
>>>
>>
>> That looks like a better way to handle the tmp_t problem.
>>
>> I updated the patch per your comments. Hopefully this will be one of the last updates.
>> Tested on a CDH3 cluster as a module without any problems.
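For completeness, the suggested filetrans rules would be paired with a type
declaration along these lines (a sketch, not the literal patch):

type hadoop_hsperfdata_t;
files_tmp_file(hadoop_hsperfdata_t)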
>
> There are several little issues with style, but it'll be easier just to fix them when it's committed.
>
> Other comments inline.
>
I did my best locking down the ports hadoop uses. Unfortunately the services
use high, randomized ports, making tcp_connect_generic_port a must-have.
Hopefully one day hadoop will settle on static ports. I added the
hadoop_datanode port (50010) since it is important to lock down that service.
I changed the patch based on the rest of the comments.
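The datanode port declaration in corenetwork would presumably look like this
(a sketch, assuming the standard network_port macro):

network_port(hadoop_datanode, tcp,50010,s0)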
Signed-off-by: Paul Nuzzi <pjnuzzi@tycho.ncsc.mil>