Commit Graph

13 Commits

Author SHA1 Message Date
Chris PeBenito 0992763548 Update callers for "pid" to "runtime" interface rename.
Signed-off-by: Chris PeBenito <pebenito@ieee.org>
2020-06-28 16:03:45 -04:00
Chris PeBenito 69a403cd97 Rename *_var_run_t types to *_runtime_t.
Signed-off-by: Chris PeBenito <pebenito@ieee.org>
2019-09-30 20:02:43 -04:00
Chris PeBenito 3ab07a0e1e Move all files out of the old contrib directory. 2018-06-23 10:38:58 -04:00
Chris PeBenito 09248fa0db Move modules to contrib submodule. 2011-09-09 10:10:03 -04:00
Chris PeBenito 2810bc1455 Rearrange new hadoop/ipsec interfaces. 2011-01-13 13:09:25 -05:00
Chris PeBenito 371908d1c8 Rename new hadoop ipsec interfaces. 2011-01-13 12:56:12 -05:00
Paul Nuzzi 6237b7241b hadoop: labeled ipsec
On 01/05/2011 08:48 AM, Christopher J. PeBenito wrote:
> On 12/16/10 12:32, Paul Nuzzi wrote:
>> On 12/15/2010 03:54 PM, Christopher J. PeBenito wrote:
>>> On 12/10/10 18:22, Paul Nuzzi wrote:
>>>> Added labeled IPSec support to hadoop.  SELinux will be able to enforce which services a domain
>>>> is allowed to connect to, and labeled IPSec can enforce the range of services it can receive
>>>> from.  This enforces the architecture of Hadoop without having to modify any of the code, and
>>>> adds a level of confidentiality, integrity, and authentication provided outside the software stack.
>>>
>>> A few things.
>>>
>>> The verb used in Reference Policy interfaces for peer recv is recvfrom
>>> (a holdover from previous labeled networking implementations).  So the
>>> interfaces are like hadoop_recvfrom_datanode().
>>
>> Easy change.
>>
>>> It seems like setkey should be able to setcontext any type used on ipsec
>>> associations.  I think the best thing would be to add additional support
>>> to either the ipsec or corenetwork modules (I haven't decided which one
>>> yet) for associations.  So, say we have an interface called
>>> ipsec_spd_type() which adds the parameter type to the attribute
>>> ipsec_spd_types.  Then we can have an allow setkey_t
>>> ipsec_spd_types:association setkey; rule and we don't have to update it
>>> every time more labeled networking is added.
>>
>> That seems a lot less clunky than updating setkey every time we add a new association.
>>
>>> This is definitely wrong since it's not a file:
>>> +files_type(hadoop_lan_t)
>>
>> Let me know how you would like to handle associations and I could update the
>> patch.
>
> Let's go with putting the associations in corenetwork.
>
>>  Will the files_type error be cleared up when we re-engineer this?
>
> I'm not sure what you mean.  The incorrect rule was added in your patch.
>

Adds labeled IPSec policy to hadoop to control the remote processes that are allowed to connect to the cloud's services.
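A minimal sketch of the association handling proposed in the thread above, assuming the names used there (ipsec_spd_type(), the ipsec_spd_types attribute) and placeholder callers; the module it finally landed in (corenetwork vs. ipsec) and the committed names may differ:

attribute ipsec_spd_types;

interface(`ipsec_spd_type',`
	gen_require(`
		attribute ipsec_spd_types;
	')
	# mark the caller's type as usable on labeled IPSec associations
	typeattribute $1 ipsec_spd_types;
')

# setkey then needs only one rule, covering present and future labeled networks:
allow setkey_t ipsec_spd_types:association setkey;

# peer recv interfaces keep the historical recvfrom verb, e.g.:
# hadoop_recvfrom_datanode(myservice_t)   (myservice_t is a placeholder)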

Signed-off-by: Paul Nuzzi <pjnuzzi@tycho.ncsc.mil>
2011-01-13 08:22:32 -05:00
Chris PeBenito 60ca2bd83b Rearrange some lines in hadoop. 2011-01-05 10:22:10 -05:00
Chris PeBenito a45657403b Whitespace fixes in hadoop. 2011-01-05 09:36:13 -05:00
Paul Nuzzi fcb67e8cef hadoop: update to CDH3
On 12/15/2010 03:17 PM, Christopher J. PeBenito wrote:
> On 12/13/10 10:39, Paul Nuzzi wrote:
>> On 12/11/2010 04:01 AM, Dominick Grift wrote:
>> On 12/11/2010 12:22 AM, Paul Nuzzi wrote:
>>
>> Does hadoop depend on kerberos?  If not, then kerberos_use should probably
>> be optional.
>>
>>
>>> The new version of hadoop added Kerberos for authentication.
>
> So, to be explicit, it's an unconditional requirement?

Yes.  I think all future versions of hadoop will be Kerberos-enabled.

> It seems like there should be a hadoop_home_t that is
> userdom_user_home_content()

Updated.
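A rough sketch of the two review points above, using existing refpolicy interfaces; the file context path is an assumption for illustration, not a quote from the patch:

# home directory content, per the hadoop_home_t comment:
type hadoop_home_t;
userdom_user_home_content(hadoop_home_t)
# HOME_DIR/\.hadoop(/.*)?	gen_context(system_u:object_r:hadoop_home_t,s0)

# Kerberos is an unconditional requirement in CDH3, so the call is not
# wrapped in optional_policy():
kerberos_use(hadoop_t)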

Signed-off-by: Paul Nuzzi <pjnuzzi@tycho.ncsc.mil>
2011-01-05 09:35:40 -05:00
Chris PeBenito 2f8f8e1368 Typo fix in hadoop. 2010-10-07 12:31:41 -04:00
Chris PeBenito 641ac05468 Hadoop cleanup and module version bump.
* A pass cleaning up the style.
* Adjusted some regular expressions in the file contexts: .* is the same as (.*)?, since * already means 0 or more matches (illustrated below).
* Renamed a few interfaces.
* Dropped two rules that require further explanation:

> +files_read_all_files(hadoop_t)

A very big privilege.

and

> +fs_associate(hadoop_tasktracker_t)

This is a domain, so the only files with this type should be the /proc/pid ones, which don't require associate permissions.
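To illustrate the file context note above (the path and type are made up for the example, not taken from the patch), these two specifications match exactly the same set of paths, so the shorter form was kept:

/var/lib/hadoop(.*)?	gen_context(system_u:object_r:hadoop_var_lib_t,s0)
/var/lib/hadoop.*	gen_context(system_u:object_r:hadoop_var_lib_t,s0)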
2010-10-07 10:57:55 -04:00
Paul Nuzzi bc71a042d8 hadoop 1/10 -- unconfined
On 10/04/2010 02:18 PM, Christopher J. PeBenito wrote:
> On 10/04/10 13:15, Paul Nuzzi wrote:
>> On 10/01/2010 01:56 PM, Christopher J. PeBenito wrote:
>>> On 10/01/10 11:17, Paul Nuzzi wrote:
>>>> On 10/01/2010 08:02 AM, Dominick Grift wrote:
>>>>> On Thu, Sep 30, 2010 at 03:39:40PM -0400, Paul Nuzzi wrote:
>>>>>> I updated the patch based on recommendations from the mailing list.
>>>>>> All of hadoop's services are included in one module instead of
>>>>>> individual ones.  Unconfined and sysadm roles are given access to
>>>>>> hadoop and zookeeper client domain transitions. The services are started
>>>>>> using run_init.  Let me know what you think.
>>>>>
>>>>> Why do some hadoop domains need to manage generic tmp?
>>>>>
>>>>> files_manage_generic_tmp_dirs(zookeeper_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_$1_initrc_t)
>>>>> files_manage_generic_tmp_files(hadoop_$1_initrc_t)
>>>>> files_manage_generic_tmp_files(hadoop_$1_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_$1_t)
>>>>
>>>> This has to be done for Java JMX to work.  All of the files are written to
>>>> /tmp/hsperfdata_(hadoop/zookeeper). /tmp/hsperfdata_ is labeled tmp_t while
>>>> all the files for each service are labeled with hadoop_*_tmp_t.  The first service
>>>> will end up owning the directory if it is not labeled tmp_t.
>>>
>>> The hsperfdata dir in /tmp is certainly the bane of policy writers.  Based on a quick look through the policy, it looks like the only dir they create in /tmp is this hsperfdata dir.  I suggest you do something like
>>>
>>> files_tmp_filetrans(hadoop_t, hadoop_hsperfdata_t, dir)
>>> files_tmp_filetrans(zookeeper_t, hadoop_hsperfdata_t, dir)
>>>
>>> filetrans_pattern(hadoop_t, hadoop_hsperfdata_t, hadoop_tmp_t, file)
>>> filetrans_pattern(zookeeper_t, hadoop_hsperfdata_t, zookeeper_tmp_t, file)
>>>
>>
>> That looks like a better way to handle the tmp_t problem.
>>
>> I changed the patch per your comments.  Hopefully this will be one of the last updates.
>> Tested on a CDH3 cluster as a module without any problems.
>
> There are several little issues with style, but it'll be easier just to fix them when it's committed.
>
> Other comments inline.
>

I did my best locking down the ports hadoop uses.  Unfortunately the services use high, randomized ports, making
tcp_connect_generic_port a must-have.  Hopefully one day hadoop will settle on static ports.  I added the hadoop_datanode port 50010 since it is important to lock down that service.  I changed the patch based on the rest of the comments.
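A rough sketch of what that looks like in policy, assuming the standard corenetwork conventions (the generated corenet_* interface names and the exact hadoop domain names here are assumptions, not quotes from the committed patch):

# corenetwork declaration for the fixed datanode port:
network_port(hadoop_datanode, tcp,50010,s0)

# hadoop.te: the datanode binds to its dedicated port, while the other
# services fall back to the generic-port interface for their high,
# randomized runtime ports.
corenet_tcp_bind_hadoop_datanode_port(hadoop_datanode_t)
corenet_tcp_connect_hadoop_datanode_port(hadoop_t)
corenet_tcp_connect_generic_port(hadoop_t)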

Signed-off-by: Paul Nuzzi <pjnuzzi@tycho.ncsc.mil>
2010-10-07 08:07:16 -04:00