bc71a042d8
On 10/04/2010 02:18 PM, Christopher J. PeBenito wrote:
> On 10/04/10 13:15, Paul Nuzzi wrote:
>> On 10/01/2010 01:56 PM, Christopher J. PeBenito wrote:
>>> On 10/01/10 11:17, Paul Nuzzi wrote:
>>>> On 10/01/2010 08:02 AM, Dominick Grift wrote:
>>>>> On Thu, Sep 30, 2010 at 03:39:40PM -0400, Paul Nuzzi wrote:
>>>>>> I updated the patch based on recommendations from the mailing list.
>>>>>> All of hadoop's services are included in one module instead of
>>>>>> individual ones. Unconfined and sysadm roles are given access to
>>>>>> hadoop and zookeeper client domain transitions. The services are
>>>>>> started using run_init. Let me know what you think.
>>>>>
>>>>> Why do some hadoop domains need to manage generic tmp?
>>>>>
>>>>> files_manage_generic_tmp_dirs(zookeeper_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_$1_initrc_t)
>>>>> files_manage_generic_tmp_files(hadoop_$1_initrc_t)
>>>>> files_manage_generic_tmp_files(hadoop_$1_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_$1_t)
>>>>
>>>> This has to be done for Java JMX to work. All of the files are written to
>>>> /tmp/hsperfdata_(hadoop/zookeeper). /tmp/hsperfdata_ is labeled tmp_t,
>>>> while all the files for each service are labeled hadoop_*_tmp_t. The
>>>> first service would end up owning the directory if it were not labeled
>>>> tmp_t.
>>>
>>> The hsperfdata dir in /tmp is certainly the bane of policy writers. Based
>>> on a quick look through the policy, it looks like the only dir they create
>>> in /tmp is this hsperfdata dir. I suggest you do something like:
>>>
>>> files_tmp_filetrans(hadoop_t, hadoop_hsperfdata_t, dir)
>>> files_tmp_filetrans(zookeeper_t, hadoop_hsperfdata_t, dir)
>>>
>>> filetrans_pattern(hadoop_t, hadoop_hsperfdata_t, hadoop_tmp_t, file)
>>> filetrans_pattern(zookeeper_t, hadoop_hsperfdata_t, zookeeper_tmp_t, file)
>>
>> That looks like a better way to handle the tmp_t problem.
>>
>> I changed the patch with your comments. Hopefully this will be one of the
>> last updates. Tested on a CDH3 cluster as a module without any problems.
>
> There are several little issues with style, but it'll be easier just to fix
> them when it's committed.
>
> Other comments inline.

I did my best locking down the ports hadoop uses. Unfortunately the services
use high, randomized ports, making tcp_connect_generic_port a must-have.
Hopefully one day hadoop will settle on static ports. I added the
hadoop_datanode port 50010 since it is important to lock down that service. I
changed the patch based on the rest of the comments.

Signed-off-by: Paul Nuzzi <pjnuzzi@tycho.ncsc.mil>
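The hsperfdata arrangement PeBenito suggests can be sketched as a small refpolicy fragment. This is only an illustration of how the pieces fit together, not the submitted patch; the hadoop_hsperfdata_t type declaration and the manage rules are assumptions added here for completeness:

```
# Sketch only: a shared type for the JVM's /tmp/hsperfdata_<user> directory,
# so no single service domain ends up owning it.
type hadoop_hsperfdata_t;
files_tmp_file(hadoop_hsperfdata_t)

# Both daemons create the hsperfdata dir in /tmp with the shared label...
files_tmp_filetrans(hadoop_t, hadoop_hsperfdata_t, dir)
files_tmp_filetrans(zookeeper_t, hadoop_hsperfdata_t, dir)

# ...while files created inside it keep per-service tmp labels.
filetrans_pattern(hadoop_t, hadoop_hsperfdata_t, hadoop_tmp_t, file)
filetrans_pattern(zookeeper_t, hadoop_hsperfdata_t, zookeeper_tmp_t, file)

# Assumed: each domain may manage the shared directory itself.
manage_dirs_pattern(hadoop_t, hadoop_hsperfdata_t, hadoop_hsperfdata_t)
manage_dirs_pattern(zookeeper_t, hadoop_hsperfdata_t, hadoop_hsperfdata_t)
```

The point of the design is that the type transition fires on the directory create in generic tmp_t space, so neither hadoop_t nor zookeeper_t needs files_manage_generic_tmp_dirs at all.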
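Locking down the datanode port would follow the usual corenet pattern. A sketch under stated assumptions: the hadoop_datanode_t domain name is hypothetical here (the patch uses templated hadoop_$1_t domains), and the corenet_tcp_bind_hadoop_datanode_port interface is the one refpolicy generates from the port declaration:

```
# In corenetwork.te.in: declare the well-known datanode port.
network_port(hadoop_datanode, tcp,50010,s0)

# In hadoop.te: let the (hypothetical) datanode domain bind only its port.
corenet_tcp_bind_generic_node(hadoop_datanode_t)
corenet_tcp_bind_hadoop_datanode_port(hadoop_datanode_t)
```

The remaining high, randomized ports are what force the broad corenet_tcp_connect_generic_port allowance the author mentions.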
---
 auditadm.fc
 auditadm.if
 auditadm.te
 dbadm.fc
 dbadm.if
 dbadm.te
 guest.fc
 guest.if
 guest.te
 logadm.fc
 logadm.if
 logadm.te
 metadata.xml
 secadm.fc
 secadm.if
 secadm.te
 staff.fc
 staff.if
 staff.te
 sysadm.fc
 sysadm.if
 sysadm.te
 unprivuser.fc
 unprivuser.if
 unprivuser.te
 webadm.fc
 webadm.if
 webadm.te
 xguest.fc
 xguest.if
 xguest.te