selinux-refpolicy/policy/modules/services/hadoop.if

## <summary>Software for reliable, scalable, distributed computing.</summary>

########################################
## <summary>
##	The template to define a hadoop domain.
## </summary>
## <param name="domain_prefix">
##	<summary>
##	Domain prefix to be used.
##	</summary>
## </param>
#
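#
# Example invocation (hypothetical prefix; the per-service policy in
# hadoop.te supplies the real prefixes, e.g. datanode):
#
#	hadoop_domain_template(datanode)
#
# This declares hadoop_datanode_t and hadoop_datanode_initrc_t, plus the
# matching lock, log, tmp, var_lib, and pid file types defined below.
#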
template(`hadoop_domain_template',`
	gen_require(`
		attribute hadoop_domain;
		type hadoop_log_t, hadoop_var_lib_t, hadoop_var_run_t;
		type hadoop_exec_t, hadoop_hsperfdata_t;
	')

	########################################
	#
	# Shared declarations.
	#
	type hadoop_$1_t, hadoop_domain;
	domain_type(hadoop_$1_t)
	domain_entry_file(hadoop_$1_t, hadoop_exec_t)
	role system_r types hadoop_$1_t;

	type hadoop_$1_initrc_t;
	type hadoop_$1_initrc_exec_t;
	init_script_domain(hadoop_$1_initrc_t, hadoop_$1_initrc_exec_t)
	role system_r types hadoop_$1_initrc_t;

	type hadoop_$1_initrc_var_run_t;
	files_pid_file(hadoop_$1_initrc_var_run_t)

	type hadoop_$1_lock_t;
	files_lock_file(hadoop_$1_lock_t)

	type hadoop_$1_log_t;
	logging_log_file(hadoop_$1_log_t)

	type hadoop_$1_tmp_t;
	files_tmp_file(hadoop_$1_tmp_t)

	type hadoop_$1_var_lib_t;
	files_type(hadoop_$1_var_lib_t)

	####################################
	#
	# Shared hadoop_$1 policy.
	#
	allow hadoop_$1_t self:capability { chown kill setgid setuid };
	allow hadoop_$1_t self:process { execmem getsched setsched sigkill signal };
	allow hadoop_$1_t self:key search;
	allow hadoop_$1_t self:fifo_file rw_fifo_file_perms;
	allow hadoop_$1_t self:unix_dgram_socket create_socket_perms;
	allow hadoop_$1_t self:tcp_socket create_stream_socket_perms;
	allow hadoop_$1_t self:udp_socket create_socket_perms;
	dontaudit hadoop_$1_t self:netlink_route_socket rw_netlink_socket_perms;

	allow hadoop_$1_t hadoop_domain:process signull;

	manage_files_pattern(hadoop_$1_t, hadoop_$1_log_t, hadoop_$1_log_t)
	filetrans_pattern(hadoop_$1_t, hadoop_log_t, hadoop_$1_log_t, { dir file })
	logging_search_logs(hadoop_$1_t)

	manage_dirs_pattern(hadoop_$1_t, hadoop_$1_var_lib_t, hadoop_$1_var_lib_t)
	manage_files_pattern(hadoop_$1_t, hadoop_$1_var_lib_t, hadoop_$1_var_lib_t)
	filetrans_pattern(hadoop_$1_t, hadoop_var_lib_t, hadoop_$1_var_lib_t, file)
	files_search_var_lib(hadoop_$1_t)

	manage_files_pattern(hadoop_$1_t, hadoop_$1_initrc_var_run_t, hadoop_$1_initrc_var_run_t)
	filetrans_pattern(hadoop_$1_t, hadoop_var_run_t, hadoop_$1_initrc_var_run_t, file)
	files_search_pids(hadoop_$1_t)

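	# Java JMX writes performance data to /tmp/hsperfdata_<user>. The
	# directory is shared between services, so it keeps the common
	# hadoop_hsperfdata_t type, while files created inside it transition
	# to this service's private tmp type (see the filetrans rules below).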
	allow hadoop_$1_t hadoop_hsperfdata_t:dir manage_dir_perms;
	manage_files_pattern(hadoop_$1_t, hadoop_$1_tmp_t, hadoop_$1_tmp_t)
	filetrans_pattern(hadoop_$1_t, hadoop_hsperfdata_t, hadoop_$1_tmp_t, file)
	files_tmp_filetrans(hadoop_$1_t, hadoop_hsperfdata_t, dir)

	kernel_read_kernel_sysctls(hadoop_$1_t)
	kernel_read_sysctl(hadoop_$1_t)
	kernel_read_network_state(hadoop_$1_t)
	kernel_read_system_state(hadoop_$1_t)

	corecmd_exec_bin(hadoop_$1_t)
	corecmd_exec_shell(hadoop_$1_t)

	corenet_all_recvfrom_unlabeled(hadoop_$1_t)
	corenet_all_recvfrom_netlabel(hadoop_$1_t)
	corenet_tcp_bind_all_nodes(hadoop_$1_t)
	corenet_tcp_sendrecv_generic_if(hadoop_$1_t)
	corenet_udp_sendrecv_generic_if(hadoop_$1_t)
	corenet_tcp_sendrecv_generic_node(hadoop_$1_t)
	corenet_udp_sendrecv_generic_node(hadoop_$1_t)
	corenet_tcp_sendrecv_all_ports(hadoop_$1_t)
	corenet_udp_bind_generic_node(hadoop_$1_t)
	# Hadoop services use high, randomized ports, so connecting to
	# generic ports must be allowed. If Hadoop ever settles on static
	# ports, remove the line below and lock the ports down.
	corenet_tcp_connect_generic_port(hadoop_$1_t)

	dev_read_rand(hadoop_$1_t)
	dev_read_urand(hadoop_$1_t)
	dev_read_sysfs(hadoop_$1_t)

	files_read_etc_files(hadoop_$1_t)
	auth_domtrans_chkpwd(hadoop_$1_t)

	hadoop_match_lan_spd(hadoop_$1_t)

	init_read_utmp(hadoop_$1_t)
	init_use_fds(hadoop_$1_t)
	init_use_script_fds(hadoop_$1_t)
	init_use_script_ptys(hadoop_$1_t)

	logging_send_audit_msgs(hadoop_$1_t)
	logging_send_syslog_msg(hadoop_$1_t)

	miscfiles_read_localization(hadoop_$1_t)

	sysnet_read_config(hadoop_$1_t)

	hadoop_exec_config(hadoop_$1_t)

	java_exec(hadoop_$1_t)
	kerberos_use(hadoop_$1_t)

	su_exec(hadoop_$1_t)

	optional_policy(`
		nscd_socket_use(hadoop_$1_t)
	')

	####################################
	#
	# Shared hadoop_$1 initrc policy.
	#
	allow hadoop_$1_initrc_t self:capability { setgid setuid };
	dontaudit hadoop_$1_initrc_t self:capability sys_tty_config;
	allow hadoop_$1_initrc_t self:process setsched;
	allow hadoop_$1_initrc_t self:fifo_file rw_fifo_file_perms;

	allow hadoop_$1_initrc_t hadoop_$1_t:process { signal signull };

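	# Entering the service domain: executing hadoop_exec_t from the init
	# script domain transitions to hadoop_$1_t.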
	domtrans_pattern(hadoop_$1_initrc_t, hadoop_exec_t, hadoop_$1_t)

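	# The init script keeps its own lock and pid files, each with a
	# private per-service type.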
	manage_files_pattern(hadoop_$1_initrc_t, hadoop_$1_lock_t, hadoop_$1_lock_t)
	files_lock_filetrans(hadoop_$1_initrc_t, hadoop_$1_lock_t, file)
	files_search_locks(hadoop_$1_initrc_t)

	manage_files_pattern(hadoop_$1_initrc_t, hadoop_$1_initrc_var_run_t, hadoop_$1_initrc_var_run_t)
	filetrans_pattern(hadoop_$1_initrc_t, hadoop_var_run_t, hadoop_$1_initrc_var_run_t, file)
	files_search_pids(hadoop_$1_initrc_t)

	manage_files_pattern(hadoop_$1_initrc_t, hadoop_$1_log_t, hadoop_$1_log_t)
	filetrans_pattern(hadoop_$1_initrc_t, hadoop_log_t, hadoop_$1_log_t, { dir file })
	logging_search_logs(hadoop_$1_initrc_t)

	manage_dirs_pattern(hadoop_$1_initrc_t, hadoop_var_run_t, hadoop_var_run_t)
	manage_files_pattern(hadoop_$1_initrc_t, hadoop_var_run_t, hadoop_var_run_t)

	kernel_read_kernel_sysctls(hadoop_$1_initrc_t)
	kernel_read_sysctl(hadoop_$1_initrc_t)
	kernel_read_system_state(hadoop_$1_initrc_t)

	corecmd_exec_bin(hadoop_$1_initrc_t)
	corecmd_exec_shell(hadoop_$1_initrc_t)

	files_read_etc_files(hadoop_$1_initrc_t)
	files_read_usr_files(hadoop_$1_initrc_t)

consoletype_exec(hadoop_$1_initrc_t)
fs_getattr_xattr_fs(hadoop_$1_initrc_t)
fs_search_cgroup_dirs(hadoop_$1_initrc_t)
term_use_generic_ptys(hadoop_$1_initrc_t)
hadoop_exec_config(hadoop_$1_initrc_t)
init_rw_utmp(hadoop_$1_initrc_t)
init_use_fds(hadoop_$1_initrc_t)
init_use_script_ptys(hadoop_$1_initrc_t)
logging_send_syslog_msg(hadoop_$1_initrc_t)
logging_send_audit_msgs(hadoop_$1_initrc_t)
miscfiles_read_localization(hadoop_$1_initrc_t)
userdom_dontaudit_search_user_home_dirs(hadoop_$1_initrc_t)
optional_policy(`
nscd_socket_use(hadoop_$1_initrc_t)
')
')
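
# Example usage (illustrative sketch, not part of the patch itself):
# a policy module instantiating this template for the datanode
# service would get a hadoop_datanode_t / hadoop_datanode_initrc_t
# domain pair carrying the shared rules above:
#
#	hadoop_domain_template(datanode)
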
########################################
## <summary>
## Role access for hadoop.
## </summary>
## <param name="role">
## <summary>
## Role allowed access.
## </summary>
## </param>
## <param name="domain">
## <summary>
## Domain allowed access.
hadoop 1/10 -- unconfined On 10/04/2010 02:18 PM, Christopher J. PeBenito wrote: > On 10/04/10 13:15, Paul Nuzzi wrote: >> On 10/01/2010 01:56 PM, Christopher J. PeBenito wrote: >>> On 10/01/10 11:17, Paul Nuzzi wrote: >>>> On 10/01/2010 08:02 AM, Dominick Grift wrote: >>>>> On Thu, Sep 30, 2010 at 03:39:40PM -0400, Paul Nuzzi wrote: >>>>>> I updated the patch based on recommendations from the mailing list. >>>>>> All of hadoop's services are included in one module instead of >>>>>> individual ones. Unconfined and sysadm roles are given access to >>>>>> hadoop and zookeeper client domain transitions. The services are started >>>>>> using run_init. Let me know what you think. >>>>> >>>>> Why do some hadoop domain need to manage generic tmp? >>>>> >>>>> files_manage_generic_tmp_dirs(zookeeper_t) >>>>> files_manage_generic_tmp_dirs(hadoop_t) >>>>> files_manage_generic_tmp_dirs(hadoop_$1_initrc_t) >>>>> files_manage_generic_tmp_files(hadoop_$1_initrc_t) >>>>> files_manage_generic_tmp_files(hadoop_$1_t) >>>>> files_manage_generic_tmp_dirs(hadoop_$1_t) >>>> >>>> This has to be done for Java JMX to work. All of the files are written to >>>> /tmp/hsperfdata_(hadoop/zookeeper). /tmp/hsperfdata_ is labeled tmp_t while >>>> all the files for each service are labeled with hadoop_*_tmp_t. The first service >>>> will end up owning the directory if it is not labeled tmp_t. >>> >>> The hsperfdata dir in /tmp certainly the bane of policy writers. Based on a quick look through the policy, it looks like the only dir they create in /tmp is this hsperfdata dir. I suggest you do something like >>> >>> files_tmp_filetrans(hadoop_t, hadoop_hsperfdata_t, dir) >>> files_tmp_filetrans(zookeeper_t, hadoop_hsperfdata_t, dir) >>> >>> filetrans_pattern(hadoop_t, hadoop_hsperfdata_t, hadoop_tmp_t, file) >>> filetrans_pattern(zookeeper_t, hadoop_hsperfdata_t, zookeeper_tmp_t, file) >>> >> >> That looks like a better way to handle the tmp_t problem. >> >> I changed the patch with your comments. Hopefully this will be one of the last updates. >> Tested on a CDH3 cluster as a module without any problems. > > There are several little issues with style, but it'll be easier just to fix them when its committed. > > Other comments inline. > I did my best locking down the ports hadoop uses. Unfortunately the services use high, randomized ports making tcp_connect_generic_port a must have. Hopefully one day hadoop will settle on static ports. I added hadoop_datanode port 50010 since it is important to lock down that service. I changed the patch based on the rest of the comments. Signed-off-by: Paul Nuzzi <pjnuzzi@tycho.ncsc.mil>
2010-10-05 19:59:29 +00:00
## </summary>
## </param>
## <rolecap/>
#
interface(`hadoop_role',`
gen_require(`
type hadoop_t, zookeeper_t;
')
hadoop_domtrans($2)
role $1 types hadoop_t;
allow $2 hadoop_t:process { ptrace signal_perms };
ps_process_pattern($2, hadoop_t)
hadoop_domtrans_zookeeper_client($2)
role $1 types zookeeper_t;
allow $2 zookeeper_t:process { ptrace signal_perms };
ps_process_pattern($2, zookeeper_t)
')
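
# Example usage (sketch; assumes the sysadm_r role and sysadm_t
# domain from the base policy): granting administrators the hadoop
# and zookeeper client domain transitions described above:
#
#	hadoop_role(sysadm_r, sysadm_t)
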
########################################
## <summary>
## Execute hadoop in the
## hadoop domain.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed to transition.
## </summary>
## </param>
#
interface(`hadoop_domtrans',`
gen_require(`
type hadoop_t, hadoop_exec_t;
')
domtrans_pattern($1, hadoop_exec_t, hadoop_t)
')
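
# Example usage (myapp_t is a hypothetical caller domain): allow an
# application domain to execute hadoop_exec_t entrypoints with an
# automatic transition to hadoop_t:
#
#	hadoop_domtrans(myapp_t)
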
########################################
## <summary>
## Allow the specified domain to
## receive labeled network packets
## from hadoop.
## </summary>
## <param name="domain">
## <summary>
##	Domain allowed access.
## </summary>
## </param>
#
interface(`hadoop_recvfrom',`
gen_require(`
type hadoop_t;
')
allow $1 hadoop_t:peer recv;
')
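#
# Usage sketch (illustrative only; myapp_t is a hypothetical domain
# declared in the calling module, not part of this policy):
#
#	hadoop_recvfrom(myapp_t)
#
# The peer recv permission is only checked when labeled networking
# (e.g. NetLabel or labeled IPsec) is configured between the hosts.
#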
########################################
## <summary>
## Execute zookeeper client in the
## zookeeper client domain.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed to transition.
## </summary>
## </param>
#
interface(`hadoop_domtrans_zookeeper_client',`
gen_require(`
type zookeeper_t, zookeeper_exec_t;
')
corecmd_search_bin($1)
domtrans_pattern($1, zookeeper_exec_t, zookeeper_t)
')
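#
# Usage sketch (illustrative only): allowing an administrative domain
# such as sysadm_t to transition into zookeeper_t when it executes the
# zookeeper client:
#
#	hadoop_domtrans_zookeeper_client(sysadm_t)
#
# A role-aware wrapper around this interface would also need the
# calling role authorized for the client domain, e.g.
# role some_r types zookeeper_t;
#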
########################################
## <summary>
##	Receive network packets from
##	zookeeper_t over a labeled
##	connection.
## </summary>
## <param name="domain">
## <summary>
##	Domain allowed access.
## </summary>
## </param>
#
interface(`hadoop_recvfrom_zookeeper_client',`
gen_require(`
type zookeeper_t;
')
allow $1 zookeeper_t:peer recv;
')
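#
# Usage sketch (illustrative only; myapp_t is a hypothetical domain):
# a domain that accepts labeled packets sent by the zookeeper client
# would call:
#
#	hadoop_recvfrom_zookeeper_client(myapp_t)
#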
########################################
## <summary>
## Execute zookeeper server in the
## zookeeper server domain.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed to transition.
## </summary>
## </param>
#
interface(`hadoop_domtrans_zookeeper_server',`
gen_require(`
type zookeeper_server_t, zookeeper_server_exec_t;
')
corecmd_search_bin($1)
domtrans_pattern($1, zookeeper_server_exec_t, zookeeper_server_t)
')
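#
# Usage sketch (illustrative only; mymgmt_t is a hypothetical
# management domain that starts the zookeeper server):
#
#	hadoop_domtrans_zookeeper_server(mymgmt_t)
#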
########################################
## <summary>
##	Receive network packets from
##	zookeeper_server_t over a labeled
##	connection.
## </summary>
## <param name="domain">
## <summary>
##	Domain allowed access.
## </summary>
## </param>
#
interface(`hadoop_recvfrom_zookeeper_server',`
gen_require(`
2011-01-13 18:09:25 +00:00
type zookeeper_server_t;
hadoop 1/10 -- unconfined On 10/04/2010 02:18 PM, Christopher J. PeBenito wrote: > On 10/04/10 13:15, Paul Nuzzi wrote: >> On 10/01/2010 01:56 PM, Christopher J. PeBenito wrote: >>> On 10/01/10 11:17, Paul Nuzzi wrote: >>>> On 10/01/2010 08:02 AM, Dominick Grift wrote: >>>>> On Thu, Sep 30, 2010 at 03:39:40PM -0400, Paul Nuzzi wrote: >>>>>> I updated the patch based on recommendations from the mailing list. >>>>>> All of hadoop's services are included in one module instead of >>>>>> individual ones. Unconfined and sysadm roles are given access to >>>>>> hadoop and zookeeper client domain transitions. The services are started >>>>>> using run_init. Let me know what you think. >>>>> >>>>> Why do some hadoop domain need to manage generic tmp? >>>>> >>>>> files_manage_generic_tmp_dirs(zookeeper_t) >>>>> files_manage_generic_tmp_dirs(hadoop_t) >>>>> files_manage_generic_tmp_dirs(hadoop_$1_initrc_t) >>>>> files_manage_generic_tmp_files(hadoop_$1_initrc_t) >>>>> files_manage_generic_tmp_files(hadoop_$1_t) >>>>> files_manage_generic_tmp_dirs(hadoop_$1_t) >>>> >>>> This has to be done for Java JMX to work. All of the files are written to >>>> /tmp/hsperfdata_(hadoop/zookeeper). /tmp/hsperfdata_ is labeled tmp_t while >>>> all the files for each service are labeled with hadoop_*_tmp_t. The first service >>>> will end up owning the directory if it is not labeled tmp_t. >>> >>> The hsperfdata dir in /tmp certainly the bane of policy writers. Based on a quick look through the policy, it looks like the only dir they create in /tmp is this hsperfdata dir. I suggest you do something like >>> >>> files_tmp_filetrans(hadoop_t, hadoop_hsperfdata_t, dir) >>> files_tmp_filetrans(zookeeper_t, hadoop_hsperfdata_t, dir) >>> >>> filetrans_pattern(hadoop_t, hadoop_hsperfdata_t, hadoop_tmp_t, file) >>> filetrans_pattern(zookeeper_t, hadoop_hsperfdata_t, zookeeper_tmp_t, file) >>> >> >> That looks like a better way to handle the tmp_t problem. >> >> I changed the patch with your comments. Hopefully this will be one of the last updates. >> Tested on a CDH3 cluster as a module without any problems. > > There are several little issues with style, but it'll be easier just to fix them when its committed. > > Other comments inline. > I did my best locking down the ports hadoop uses. Unfortunately the services use high, randomized ports making tcp_connect_generic_port a must have. Hopefully one day hadoop will settle on static ports. I added hadoop_datanode port 50010 since it is important to lock down that service. I changed the patch based on the rest of the comments. Signed-off-by: Paul Nuzzi <pjnuzzi@tycho.ncsc.mil>
2010-10-05 19:59:29 +00:00
')
2011-01-13 18:09:25 +00:00
allow $1 zookeeper_server_t:peer recv;
hadoop 1/10 -- unconfined On 10/04/2010 02:18 PM, Christopher J. PeBenito wrote: > On 10/04/10 13:15, Paul Nuzzi wrote: >> On 10/01/2010 01:56 PM, Christopher J. PeBenito wrote: >>> On 10/01/10 11:17, Paul Nuzzi wrote: >>>> On 10/01/2010 08:02 AM, Dominick Grift wrote: >>>>> On Thu, Sep 30, 2010 at 03:39:40PM -0400, Paul Nuzzi wrote: >>>>>> I updated the patch based on recommendations from the mailing list. >>>>>> All of hadoop's services are included in one module instead of >>>>>> individual ones. Unconfined and sysadm roles are given access to >>>>>> hadoop and zookeeper client domain transitions. The services are started >>>>>> using run_init. Let me know what you think. >>>>> >>>>> Why do some hadoop domain need to manage generic tmp? >>>>> >>>>> files_manage_generic_tmp_dirs(zookeeper_t) >>>>> files_manage_generic_tmp_dirs(hadoop_t) >>>>> files_manage_generic_tmp_dirs(hadoop_$1_initrc_t) >>>>> files_manage_generic_tmp_files(hadoop_$1_initrc_t) >>>>> files_manage_generic_tmp_files(hadoop_$1_t) >>>>> files_manage_generic_tmp_dirs(hadoop_$1_t) >>>> >>>> This has to be done for Java JMX to work. All of the files are written to >>>> /tmp/hsperfdata_(hadoop/zookeeper). /tmp/hsperfdata_ is labeled tmp_t while >>>> all the files for each service are labeled with hadoop_*_tmp_t. The first service >>>> will end up owning the directory if it is not labeled tmp_t. >>> >>> The hsperfdata dir in /tmp certainly the bane of policy writers. Based on a quick look through the policy, it looks like the only dir they create in /tmp is this hsperfdata dir. I suggest you do something like >>> >>> files_tmp_filetrans(hadoop_t, hadoop_hsperfdata_t, dir) >>> files_tmp_filetrans(zookeeper_t, hadoop_hsperfdata_t, dir) >>> >>> filetrans_pattern(hadoop_t, hadoop_hsperfdata_t, hadoop_tmp_t, file) >>> filetrans_pattern(zookeeper_t, hadoop_hsperfdata_t, zookeeper_tmp_t, file) >>> >> >> That looks like a better way to handle the tmp_t problem. >> >> I changed the patch with your comments. Hopefully this will be one of the last updates. >> Tested on a CDH3 cluster as a module without any problems. > > There are several little issues with style, but it'll be easier just to fix them when its committed. > > Other comments inline. > I did my best locking down the ports hadoop uses. Unfortunately the services use high, randomized ports making tcp_connect_generic_port a must have. Hopefully one day hadoop will settle on static ports. I added hadoop_datanode port 50010 since it is important to lock down that service. I changed the patch based on the rest of the comments. Signed-off-by: Paul Nuzzi <pjnuzzi@tycho.ncsc.mil>
2010-10-05 19:59:29 +00:00
')
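
#
# Illustrative usage sketch (kept commented out; myclient_t is
# a hypothetical caller domain, not defined in this policy): a
# client module that receives packets from the zookeeper server
# over labeled networking would call, from its own .te file:
#
#	hadoop_recvfrom_zookeeper_server(myclient_t)
#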

########################################
## <summary>
##	Execute the zookeeper server init
##	script in the init script domain.
## </summary>
## <param name="domain">
##	<summary>
##	Domain allowed to transition.
##	</summary>
## </param>
#
interface(`hadoop_initrc_domtrans_zookeeper_server',`
	gen_require(`
		type zookeeper_server_initrc_exec_t;
	')

	init_labeled_script_domtrans($1, zookeeper_server_initrc_exec_t)
')
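
#
# Illustrative usage sketch (commented out; assumes the standard
# sysadm_t role domain as the caller): an administrator starting
# the zookeeper server through its labeled init script, e.g. via
# run_init, would be granted the transition with:
#
#	hadoop_initrc_domtrans_zookeeper_server(sysadm_t)
#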

########################################
## <summary>
##	Receive from the hadoop datanode
##	(hadoop_datanode_t) over a labeled
##	network connection.
## </summary>
## <param name="domain">
##	<summary>
##	Domain allowed access.
##	</summary>
## </param>
#
interface(`hadoop_recvfrom_datanode',`
	gen_require(`
		type hadoop_datanode_t;
	')

	allow $1 hadoop_datanode_t:peer recv;
')
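
#
# Illustrative usage sketch (commented out; hdfs_client_t is a
# hypothetical domain): under labeled IPSec, receiving from a
# datanode is typically paired with matching the network's SPD
# entry, so a client would call both:
#
#	hadoop_recvfrom_datanode(hdfs_client_t)
#	hadoop_match_lan_spd(hdfs_client_t)
#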

########################################
## <summary>
##	Read hadoop configuration files
##	(hadoop_etc_t).
## </summary>
## <param name="domain">
##	<summary>
##	Domain allowed access.
##	</summary>
## </param>
#
interface(`hadoop_read_config',`
	gen_require(`
		type hadoop_etc_t;
	')

	read_files_pattern($1, hadoop_etc_t, hadoop_etc_t)
	read_lnk_files_pattern($1, hadoop_etc_t, hadoop_etc_t)
')
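
#
# Illustrative usage sketch (commented out; myapp_t is a
# hypothetical domain): a program that only parses the
# configuration, e.g. core-site.xml, needs read access alone:
#
#	hadoop_read_config(myapp_t)
#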

########################################
## <summary>
##	Read and execute hadoop configuration
##	files (hadoop_etc_t).
## </summary>
## <param name="domain">
##	<summary>
##	Domain allowed access.
##	</summary>
## </param>
#
interface(`hadoop_exec_config',`
	gen_require(`
		type hadoop_etc_t;
	')

	hadoop_read_config($1)
	allow $1 hadoop_etc_t:file exec_file_perms;
')
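
#
# Illustrative usage sketch (commented out; myapp_t is a
# hypothetical domain): hadoop ships shell fragments such as
# hadoop-env.sh in its configuration directory, so domains that
# source them need execute as well as read:
#
#	hadoop_exec_config(myapp_t)
#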

########################################
## <summary>
##	Receive from the hadoop jobtracker
##	(hadoop_jobtracker_t) over a labeled
##	network connection.
## </summary>
## <param name="domain">
##	<summary>
##	Domain allowed access.
##	</summary>
## </param>
#
interface(`hadoop_recvfrom_jobtracker',`
	gen_require(`
		type hadoop_jobtracker_t;
	')

	allow $1 hadoop_jobtracker_t:peer recv;
')
########################################
## <summary>
## Give permission to a domain to
2011-01-13 18:09:25 +00:00
## polmatch on hadoop_lan_t
hadoop: labeled ipsec On 01/05/2011 08:48 AM, Christopher J. PeBenito wrote: > On 12/16/10 12:32, Paul Nuzzi wrote: >> On 12/15/2010 03:54 PM, Christopher J. PeBenito wrote: >>> On 12/10/10 18:22, Paul Nuzzi wrote: >>>> Added labeled IPSec support to hadoop. SELinux will be able to enforce what services are allowed to >>>> connect to. Labeled IPSec can enforce the range of services they can receive from. This enforces >>>> the architecture of Hadoop without having to modify any of the code. This adds a level of >>>> confidentiality, integrity, and authentication provided outside the software stack. >>> >>> A few things. >>> >>> The verb used in Reference Policy interfaces for peer recv is recvfrom >>> (a holdover from previous labeled networking implementations). So the >>> interfaces are like hadoop_recvfrom_datanode(). >> >> Easy change. >> >>> It seems like setkey should be able to setcontext any type used on ipsec >>> associations. I think the best thing would be to add additional support >>> to either the ipsec or corenetwork modules (I haven't decided which one >>> yet) for associations. So, say we have an interface called >>> ipsec_spd_type() which adds the parameter type to the attribute >>> ipsec_spd_types. Then we can have an allow setkey_t >>> ipsec_spd_types:association setkey; rule and we don't have to update it >>> every time more labeled network is added. >> >> That seems a lot less clunky than updating setkey every time we add a new association. >> >>> This is definitely wrong since its not a file: >>> +files_type(hadoop_lan_t) >> >> Let me know how you would like to handle associations and I could update the >> patch. > > Lets go with putting the associations in corenetwork. > >> Will the files_type error be cleared up when we re-engineer this? > > I'm not sure what you mean. The incorrect rule was added in your patch. > Adds labeled IPSec policy to hadoop to control the remote processes that are allowed to connect to the cloud's services. Signed-off-by: Paul Nuzzi <pjnuzzi@tycho.ncsc.mil>
2011-01-06 16:33:39 +00:00
## </summary>
## <param name="domain">
## <summary>
2011-01-13 18:09:25 +00:00
## Domain needing polmatch
hadoop: labeled ipsec On 01/05/2011 08:48 AM, Christopher J. PeBenito wrote: > On 12/16/10 12:32, Paul Nuzzi wrote: >> On 12/15/2010 03:54 PM, Christopher J. PeBenito wrote: >>> On 12/10/10 18:22, Paul Nuzzi wrote: >>>> Added labeled IPSec support to hadoop. SELinux will be able to enforce what services are allowed to >>>> connect to. Labeled IPSec can enforce the range of services they can receive from. This enforces >>>> the architecture of Hadoop without having to modify any of the code. This adds a level of >>>> confidentiality, integrity, and authentication provided outside the software stack. >>> >>> A few things. >>> >>> The verb used in Reference Policy interfaces for peer recv is recvfrom >>> (a holdover from previous labeled networking implementations). So the >>> interfaces are like hadoop_recvfrom_datanode(). >> >> Easy change. >> >>> It seems like setkey should be able to setcontext any type used on ipsec >>> associations. I think the best thing would be to add additional support >>> to either the ipsec or corenetwork modules (I haven't decided which one >>> yet) for associations. So, say we have an interface called >>> ipsec_spd_type() which adds the parameter type to the attribute >>> ipsec_spd_types. Then we can have an allow setkey_t >>> ipsec_spd_types:association setkey; rule and we don't have to update it >>> every time more labeled network is added. >> >> That seems a lot less clunky than updating setkey every time we add a new association. >> >>> This is definitely wrong since its not a file: >>> +files_type(hadoop_lan_t) >> >> Let me know how you would like to handle associations and I could update the >> patch. > > Lets go with putting the associations in corenetwork. > >> Will the files_type error be cleared up when we re-engineer this? > > I'm not sure what you mean. The incorrect rule was added in your patch. > Adds labeled IPSec policy to hadoop to control the remote processes that are allowed to connect to the cloud's services. Signed-off-by: Paul Nuzzi <pjnuzzi@tycho.ncsc.mil>
2011-01-06 16:33:39 +00:00
## permission
## </summary>
## </param>
#
2011-01-13 18:09:25 +00:00
interface(`hadoop_match_lan_spd',`
hadoop: labeled ipsec On 01/05/2011 08:48 AM, Christopher J. PeBenito wrote: > On 12/16/10 12:32, Paul Nuzzi wrote: >> On 12/15/2010 03:54 PM, Christopher J. PeBenito wrote: >>> On 12/10/10 18:22, Paul Nuzzi wrote: >>>> Added labeled IPSec support to hadoop. SELinux will be able to enforce what services are allowed to >>>> connect to. Labeled IPSec can enforce the range of services they can receive from. This enforces >>>> the architecture of Hadoop without having to modify any of the code. This adds a level of >>>> confidentiality, integrity, and authentication provided outside the software stack. >>> >>> A few things. >>> >>> The verb used in Reference Policy interfaces for peer recv is recvfrom >>> (a holdover from previous labeled networking implementations). So the >>> interfaces are like hadoop_recvfrom_datanode(). >> >> Easy change. >> >>> It seems like setkey should be able to setcontext any type used on ipsec >>> associations. I think the best thing would be to add additional support >>> to either the ipsec or corenetwork modules (I haven't decided which one >>> yet) for associations. So, say we have an interface called >>> ipsec_spd_type() which adds the parameter type to the attribute >>> ipsec_spd_types. Then we can have an allow setkey_t >>> ipsec_spd_types:association setkey; rule and we don't have to update it >>> every time more labeled network is added. >> >> That seems a lot less clunky than updating setkey every time we add a new association. >> >>> This is definitely wrong since its not a file: >>> +files_type(hadoop_lan_t) >> >> Let me know how you would like to handle associations and I could update the >> patch. > > Lets go with putting the associations in corenetwork. > >> Will the files_type error be cleared up when we re-engineer this? > > I'm not sure what you mean. The incorrect rule was added in your patch. > Adds labeled IPSec policy to hadoop to control the remote processes that are allowed to connect to the cloud's services. Signed-off-by: Paul Nuzzi <pjnuzzi@tycho.ncsc.mil>
2011-01-06 16:33:39 +00:00
gen_require(`
2011-01-13 18:09:25 +00:00
type hadoop_lan_t;
')
allow $1 hadoop_lan_t:association polmatch;
')
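#
# Illustrative sketch only: hadoop_lan_t labels the IPsec security
# policy database entries for the Hadoop LAN, and polmatch lets $1
# match against them. A corresponding labeled SPD entry could be
# loaded with setkey(8) along these lines (addresses hypothetical):
#
#	spdadd 10.0.0.2 10.0.0.1 any
#		-ctx 1 1 "system_u:object_r:hadoop_lan_t:s0"
#		-P out ipsec esp/transport//require;
#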
########################################
## <summary>
## Allow the specified domain to receive
## messages from hadoop_namenode_t over a
## labeled IPsec association.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed access.
## </summary>
## </param>
#
interface(`hadoop_recvfrom_namenode',`
gen_require(`
type hadoop_namenode_t;
')
allow $1 hadoop_namenode_t:peer recv;
')
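#
# Minimal usage sketch (the caller domain is hypothetical): a domain
# that must accept namenode traffic over labeled IPsec would call:
#
#	hadoop_recvfrom_namenode(hadoop_datanode_t)
#
# The same pattern applies to the secondarynamenode and tasktracker
# interfaces below. Note that peer recv is only checked for traffic
# arriving over a matching labeled association.
#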
########################################
## <summary>
## Allow the specified domain to receive
## messages from hadoop_secondarynamenode_t
## over a labeled IPsec association.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed access.
## </summary>
## </param>
#
interface(`hadoop_recvfrom_secondarynamenode',`
gen_require(`
type hadoop_secondarynamenode_t;
')
allow $1 hadoop_secondarynamenode_t:peer recv;
')
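#
# Usage sketch (hypothetical caller): the primary namenode pulls
# checkpoint data back from the secondary namenode, so it would call:
#
#	hadoop_recvfrom_secondarynamenode(hadoop_namenode_t)
#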
########################################
## <summary>
## Allow the specified domain to receive
## messages from hadoop_tasktracker_t over
## a labeled IPsec association.
## </summary>
## <param name="domain">
## <summary>
## Domain allowed access.
## </summary>
## </param>
#
interface(`hadoop_recvfrom_tasktracker',`
gen_require(`
type hadoop_tasktracker_t;
')
allow $1 hadoop_tasktracker_t:peer recv;
')
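#
# Usage sketch (hypothetical caller): the jobtracker receives
# heartbeats from tasktrackers, so it would call:
#
#	hadoop_recvfrom_tasktracker(hadoop_jobtracker_t)
#
# Direction discussed upstream, not implemented here: marking
# association types with an attribute, e.g. ipsec_spd_type(hadoop_lan_t),
# would let a single "allow setkey_t ipsec_spd_types:association setkey;"
# rule cover every labeled-network type instead of extending setkey
# policy for each new association.
#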