selinux-refpolicy/policy/modules/services/hadoop.if

## <summary>Software for reliable, scalable, distributed computing.</summary>

#######################################
## <summary>
##	The template to define a hadoop domain.
## </summary>
## <param name="domain_prefix">
##	<summary>
##	Domain prefix to be used.
##	</summary>
## </param>
#
template(`hadoop_domain_template',`
	gen_require(`
		attribute hadoop_domain;
		type hadoop_log_t, hadoop_var_lib_t, hadoop_var_run_t;
		type hadoop_exec_t, hadoop_hsperfdata_t;
	')
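
	# Example usage (with a hypothetical "datanode" prefix):
	#	hadoop_domain_template(datanode)
	# This declares hadoop_datanode_t and hadoop_datanode_initrc_t, plus
	# the per-service lock, log, tmp, var_lib and pid types below.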

	########################################
	#
	# Shared declarations.
	#

	type hadoop_$1_t, hadoop_domain;
	domain_type(hadoop_$1_t)
	domain_entry_file(hadoop_$1_t, hadoop_exec_t)
	role system_r types hadoop_$1_t;

	type hadoop_$1_initrc_t;
	type hadoop_$1_initrc_exec_t;
	init_script_domain(hadoop_$1_initrc_t, hadoop_$1_initrc_exec_t)
	role system_r types hadoop_$1_initrc_t;

	type hadoop_$1_initrc_var_run_t;
	files_pid_file(hadoop_$1_initrc_var_run_t)

	type hadoop_$1_lock_t;
	files_lock_file(hadoop_$1_lock_t)

	type hadoop_$1_log_t;
	logging_log_file(hadoop_$1_log_t)

	type hadoop_$1_tmp_t;
	files_tmp_file(hadoop_$1_tmp_t)

	type hadoop_$1_var_lib_t;
	files_type(hadoop_$1_var_lib_t)

	####################################
	#
	# Shared hadoop_$1 policy.
	#

	allow hadoop_$1_t self:capability { chown kill setgid setuid };
	allow hadoop_$1_t self:key search;
	allow hadoop_$1_t self:process { execmem getsched setsched sigkill signal };
	allow hadoop_$1_t self:fifo_file rw_fifo_file_perms;
	allow hadoop_$1_t self:tcp_socket create_stream_socket_perms;
	allow hadoop_$1_t self:udp_socket create_socket_perms;
	allow hadoop_$1_t self:unix_dgram_socket create_socket_perms;
	dontaudit hadoop_$1_t self:netlink_route_socket rw_netlink_socket_perms;

	allow hadoop_$1_t hadoop_domain:process signull;

	manage_files_pattern(hadoop_$1_t, hadoop_$1_log_t, hadoop_$1_log_t)
	filetrans_pattern(hadoop_$1_t, hadoop_log_t, hadoop_$1_log_t, { dir file })
	logging_search_logs(hadoop_$1_t)

	manage_dirs_pattern(hadoop_$1_t, hadoop_$1_var_lib_t, hadoop_$1_var_lib_t)
	manage_files_pattern(hadoop_$1_t, hadoop_$1_var_lib_t, hadoop_$1_var_lib_t)
	filetrans_pattern(hadoop_$1_t, hadoop_var_lib_t, hadoop_$1_var_lib_t, file)
	files_search_var_lib(hadoop_$1_t)

	manage_files_pattern(hadoop_$1_t, hadoop_$1_initrc_var_run_t, hadoop_$1_initrc_var_run_t)
	filetrans_pattern(hadoop_$1_t, hadoop_var_run_t, hadoop_$1_initrc_var_run_t, file)
	files_search_pids(hadoop_$1_t)
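
	# Java JMX/jvmstat writes performance data to /tmp/hsperfdata_<user>.
	# The directory itself gets the shared hadoop_hsperfdata_t type so no
	# single service ends up owning it, while the files each service
	# creates inside are labeled with that service's hadoop_$1_tmp_t.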
	allow hadoop_$1_t hadoop_hsperfdata_t:dir manage_dir_perms;
	manage_files_pattern(hadoop_$1_t, hadoop_$1_tmp_t, hadoop_$1_tmp_t)
	filetrans_pattern(hadoop_$1_t, hadoop_hsperfdata_t, hadoop_$1_tmp_t, file)
	files_tmp_filetrans(hadoop_$1_t, hadoop_hsperfdata_t, dir)

	kernel_read_kernel_sysctls(hadoop_$1_t)
	kernel_read_network_state(hadoop_$1_t)
	kernel_read_sysctl(hadoop_$1_t)
	kernel_read_system_state(hadoop_$1_t)

	corecmd_exec_bin(hadoop_$1_t)
	corecmd_exec_shell(hadoop_$1_t)

	corenet_all_recvfrom_unlabeled(hadoop_$1_t)
	corenet_all_recvfrom_netlabel(hadoop_$1_t)
	corenet_tcp_sendrecv_generic_if(hadoop_$1_t)
	corenet_udp_sendrecv_generic_if(hadoop_$1_t)
	corenet_tcp_sendrecv_generic_node(hadoop_$1_t)
	corenet_udp_sendrecv_generic_node(hadoop_$1_t)
	corenet_tcp_sendrecv_all_ports(hadoop_$1_t)
	corenet_tcp_bind_all_nodes(hadoop_$1_t)
	corenet_udp_bind_generic_node(hadoop_$1_t)
	# Hadoop services listen on high, randomized ports, so connecting to
	# generic ports is currently unavoidable. If Hadoop ever settles on
	# static ports, remove the line below and lock the ports down.
	corenet_tcp_connect_generic_port(hadoop_$1_t)

	dev_read_rand(hadoop_$1_t)
	dev_read_sysfs(hadoop_$1_t)
	dev_read_urand(hadoop_$1_t)

	files_read_etc_files(hadoop_$1_t)

	auth_domtrans_chkpwd(hadoop_$1_t)

	init_read_utmp(hadoop_$1_t)
	init_use_fds(hadoop_$1_t)
	init_use_script_fds(hadoop_$1_t)
	init_use_script_ptys(hadoop_$1_t)

	logging_send_audit_msgs(hadoop_$1_t)
	logging_send_syslog_msg(hadoop_$1_t)

	miscfiles_read_localization(hadoop_$1_t)

	sysnet_read_config(hadoop_$1_t)

	hadoop_exec_config(hadoop_$1_t)

	java_exec(hadoop_$1_t)

	kerberos_use(hadoop_$1_t)

	su_exec(hadoop_$1_t)

	optional_policy(`
		nscd_socket_use(hadoop_$1_t)
	')

	####################################
	#
	# Shared hadoop_$1 initrc policy.
	#

	allow hadoop_$1_initrc_t self:capability { setgid setuid };
	dontaudit hadoop_$1_initrc_t self:capability sys_tty_config;
	allow hadoop_$1_initrc_t self:process setsched;
	allow hadoop_$1_initrc_t self:fifo_file rw_fifo_file_perms;

	allow hadoop_$1_initrc_t hadoop_$1_t:process { signal signull };
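
	# Executing an entrypoint labeled hadoop_exec_t from the init script
	# domain transitions to the matching hadoop_$1_t service domain, so
	# each daemon runs confined by its own per-service policy.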
	domtrans_pattern(hadoop_$1_initrc_t, hadoop_exec_t, hadoop_$1_t)

	manage_files_pattern(hadoop_$1_initrc_t, hadoop_$1_lock_t, hadoop_$1_lock_t)
	files_lock_filetrans(hadoop_$1_initrc_t, hadoop_$1_lock_t, file)
	files_search_locks(hadoop_$1_initrc_t)

	manage_files_pattern(hadoop_$1_initrc_t, hadoop_$1_initrc_var_run_t, hadoop_$1_initrc_var_run_t)
	filetrans_pattern(hadoop_$1_initrc_t, hadoop_var_run_t, hadoop_$1_initrc_var_run_t, file)
	files_search_pids(hadoop_$1_initrc_t)

	manage_files_pattern(hadoop_$1_initrc_t, hadoop_$1_log_t, hadoop_$1_log_t)
	filetrans_pattern(hadoop_$1_initrc_t, hadoop_log_t, hadoop_$1_log_t, { dir file })
	logging_search_logs(hadoop_$1_initrc_t)

	manage_dirs_pattern(hadoop_$1_initrc_t, hadoop_var_run_t, hadoop_var_run_t)
	manage_files_pattern(hadoop_$1_initrc_t, hadoop_var_run_t, hadoop_var_run_t)

	kernel_read_kernel_sysctls(hadoop_$1_initrc_t)
	kernel_read_sysctl(hadoop_$1_initrc_t)
	kernel_read_system_state(hadoop_$1_initrc_t)

	corecmd_exec_bin(hadoop_$1_initrc_t)
	corecmd_exec_shell(hadoop_$1_initrc_t)

	files_read_etc_files(hadoop_$1_initrc_t)
	files_read_usr_files(hadoop_$1_initrc_t)

	consoletype_exec(hadoop_$1_initrc_t)
fs_getattr_xattr_fs(hadoop_$1_initrc_t)
fs_search_cgroup_dirs(hadoop_$1_initrc_t)
term_use_generic_ptys(hadoop_$1_initrc_t)
hadoop_exec_config(hadoop_$1_initrc_t)
init_rw_utmp(hadoop_$1_initrc_t)
init_use_fds(hadoop_$1_initrc_t)
init_use_script_ptys(hadoop_$1_initrc_t)
logging_send_syslog_msg(hadoop_$1_initrc_t)
logging_send_audit_msgs(hadoop_$1_initrc_t)
miscfiles_read_localization(hadoop_$1_initrc_t)
userdom_dontaudit_search_user_home_dirs(hadoop_$1_initrc_t)
optional_policy(`
nscd_socket_use(hadoop_$1_initrc_t)
')
')
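
# Usage sketch (illustration only; "datanode" is an assumed example
# prefix): a policy module would instantiate this template once per
# hadoop daemon, e.g.
#
#	hadoop_domain_template(datanode)
#
# defining hadoop_datanode_t and hadoop_datanode_initrc_t with the
# shared rules above.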
########################################
## <summary>
## Role access for hadoop.
## </summary>
## <param name="role">
## <summary>
## Role allowed access.
## </summary>
## </param>
## <param name="domain">
## <summary>
## Domain allowed access.
## </summary>
## </param>
## <rolecap/>
#
interface(`hadoop_role',`
gen_require(`
type hadoop_t, zookeeper_t;
')
hadoop_domtrans($2)
role $1 types hadoop_t;
allow $2 hadoop_t:process { ptrace signal_perms };
ps_process_pattern($2, hadoop_t)
hadoop_domtrans_zookeeper_client($2)
role $1 types zookeeper_t;
allow $2 zookeeper_t:process { ptrace signal_perms };
ps_process_pattern($2, zookeeper_t)
')
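
# Usage sketch (illustration only; staff_r and staff_t are assumed
# example arguments): granting a login user access to the hadoop and
# zookeeper client domains could look like
#
#	optional_policy(`
#		hadoop_role(staff_r, staff_t)
#	')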
########################################
## <summary>
## Execute hadoop in the
## hadoop domain.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed to transition.
## </summary>
## </param>
#
interface(`hadoop_domtrans',`
gen_require(`
type hadoop_t, hadoop_exec_t;
')
domtrans_pattern($1, hadoop_exec_t, hadoop_t)
')
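
# Usage sketch (illustration only; mydomain_t is an assumed example
# type): callers transition to hadoop_t when executing files labeled
# hadoop_exec_t, e.g.
#
#	optional_policy(`
#		hadoop_domtrans(mydomain_t)
#	')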
########################################
## <summary>
## Execute zookeeper client in the
## zookeeper client domain.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed to transition.
## </summary>
## </param>
#
interface(`hadoop_domtrans_zookeeper_client',`
gen_require(`
type zookeeper_t, zookeeper_exec_t;
')
corecmd_search_bin($1)
domtrans_pattern($1, zookeeper_exec_t, zookeeper_t)
')
########################################
## <summary>
## Execute zookeeper server in the
## zookeeper server domain.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed to transition.
## </summary>
## </param>
#
interface(`hadoop_domtrans_zookeeper_server',`
gen_require(`
type zookeeper_server_t, zookeeper_server_exec_t;
')
corecmd_search_bin($1)
domtrans_pattern($1, zookeeper_server_exec_t, zookeeper_server_t)
')
########################################
## <summary>
## Execute zookeeper server init scripts
## in the init script domain.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed to transition.
## </summary>
## </param>
#
interface(`hadoop_initrc_domtrans_zookeeper_server',`
gen_require(`
type zookeeper_server_initrc_exec_t;
')
init_labeled_script_domtrans($1, zookeeper_server_initrc_exec_t)
')
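
# Usage sketch (illustration only; sysadm_t is an assumed example
# domain): an administrative domain could be allowed to run the
# zookeeper server init script with
#
#	optional_policy(`
#		hadoop_initrc_domtrans_zookeeper_server(sysadm_t)
#	')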
########################################
## <summary>
## Read hadoop configuration files
## (hadoop_etc_t).
## </summary>
## <param name="domain">
## <summary>
## Domain allowed read access.
## </summary>
## </param>
#
interface(`hadoop_read_config',`
gen_require(`
type hadoop_etc_t;
')
read_files_pattern($1, hadoop_etc_t, hadoop_etc_t)
read_lnk_files_pattern($1, hadoop_etc_t, hadoop_etc_t)
')
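# Illustrative usage sketch (not part of the interface): a client policy
# module could call this interface as below. mydomain_t is a hypothetical
# caller type, not defined by this module.
#
#	optional_policy(`
#		hadoop_read_config(mydomain_t)
#	')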
########################################
## <summary>
## Allow the specified domain to read and
## execute hadoop configuration files (hadoop_etc_t).
## </summary>
## <param name="domain">
## <summary>
## Domain allowed read and execute access.
## </summary>
## </param>
#
interface(`hadoop_exec_config',`
gen_require(`
type hadoop_etc_t;
')
hadoop_read_config($1)
allow $1 hadoop_etc_t:file exec_file_perms;
')
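# Illustrative usage sketch (not part of the interface): a domain that
# must source or run scripts labeled hadoop_etc_t could be granted access
# as below. mydomain_t is a hypothetical caller type, not defined by this
# module.
#
#	optional_policy(`
#		hadoop_exec_config(mydomain_t)
#	')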