Added mono_run and xserver_role for unconfined, and allow it to dbus chat
with xdm.
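A hedged sketch of what these calls might look like in the unconfined module
(mono_run is the interface added here; the xdm dbus-chat interface name and
the parameter order are assumptions):

	optional_policy(`
		# run mono applications in the mono domain from the unconfined role
		mono_run(unconfined_t, unconfined_r)
	')

	optional_policy(`
		# make unconfined_r valid for the X server types and let unconfined
		# talk to xdm over dbus
		xserver_role(unconfined_r, unconfined_t)
		xserver_dbus_chat_xdm(unconfined_t)
	')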
Allow sysadm_t to read kmsg.
Allow user domains to dbus chat with kerneloops for the kerneloops desktop
GUI. Also allow them to dbus chat with the devicekit disk and power daemons.
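A hedged sketch of the corresponding interface calls, applied here to the
userdomain attribute (the actual patch may target the user templates instead):

	optional_policy(`
		kerneloops_dbus_chat(userdomain)
	')

	optional_policy(`
		devicekit_dbus_chat_disk(userdomain)
		devicekit_dbus_chat_power(userdomain)
	')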
Allow gconfd_t to read /var/lib/gconf/defaults and /proc/filesystems.
When an unconfined user uses truecrypt to mount an encrypted file, dmsetup
is called to set up a new device. This program works with udev to configure
the new device and uses SysV semaphores to synchronize state. Since udev
runs dmsetup in the lvm_t domain, the first dmsetup process needs to create
lvm_t semaphores (not unconfined_t ones) and hence also needs to run in the
lvm_t domain.
More details are available in the archives on the ML:
http://oss.tresys.com/pipermail/refpolicy/2014-May/007111.html
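A hedged sketch of the resulting call in the unconfined module (lvm_run is the
existing run interface from the lvm module):

	optional_policy(`
		# run dmsetup (and the other lvm tools) in lvm_t, so the SysV
		# semaphores it creates match those of the udev-spawned dmsetup
		lvm_run(unconfined_t, unconfined_r)
	')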
dpkg now uses an rpm_execcon()/setexecfilecon()-like function to
transition to the dpkg_script_t domain. This function will fail in
enforcing mode if the transition is not allowed.
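As a hedged illustration, the transition that must be allowed looks roughly
like this (type names follow the dpkg module; the entrypoint label for the
maintainer scripts is an assumption):

	# dpkg needs to set its exec context and transition to the script domain
	allow dpkg_t self:process setexec;
	allow dpkg_t dpkg_script_t:process transition;
	# the maintainer scripts' file type must be an entrypoint for dpkg_script_t
	allow dpkg_script_t dpkg_var_lib_t:file entrypoint;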
Make it consistent with sysadm_r:sysadm_t.
If you build targeted policy, then consider direct_initrc=y.
If you build with direct_initrc=n, then both unconfined_r:unconfined_t
and sysadm_r:sysadm_t rely on run_init for running services on behalf
of the system.
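A hedged illustration of the two setups (the option name is from refpolicy's
build.conf; the service is just an example):

	# build.conf: let the administrative domains start services directly
	DIRECT_INITRC = y

	# with DIRECT_INITRC = n, services are (re)started through run_init, e.g.:
	#   run_init /etc/init.d/crond restart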
Signed-off-by: Dominick Grift <dominick.grift@gmail.com>
It would not be sufficient in its current shape anyway, because
unconfined_r is not associated with xserver_t.
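Concretely, "associated" refers to the role-type association; the policy
would need something like:

	role unconfined_r types xserver_t;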
Signed-off-by: Dominick Grift <dominick.grift@gmail.com>
The unconfined user is currently not allowed to call portage-related
functions. However, in a targeted system (with unconfined domains
enabled), users (including administrators) should be allowed to
transition to the portage domain.
We position the portage-related calls outside the "ifdef(distro_gentoo)"
block, as other distributions support Portage as well.
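A hedged sketch of the call in the unconfined module, placed outside the
distro_gentoo conditional as described above:

	optional_policy(`
		portage_run(unconfined_t, unconfined_r)
	')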
Signed-off-by: Sven Vermeulen <sven.vermeulen@siphos.be>
* a pass cleaning up the style.
* adjusted some regular expressions in the file contexts: .* matches the same as (.*)?, since * already means zero or more matches (a short example follows below).
* renamed a few interfaces.
* two rules that I dropped, as they require further explanation:
> +files_read_all_files(hadoop_t)
A very big privilege.
and
> +fs_associate(hadoop_tasktracker_t)
This is a domain, so the only files with this type should be the /proc/pid ones, which don't require associate permissions.
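As a hedged illustration of the regex point above (the path and type are
hypothetical, not taken from the patch), these two file-context lines match
exactly the same paths:

	/var/lib/hadoop/(.*)?	gen_context(system_u:object_r:hadoop_var_lib_t,s0)
	/var/lib/hadoop/.*	gen_context(system_u:object_r:hadoop_var_lib_t,s0)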
On 10/04/2010 02:18 PM, Christopher J. PeBenito wrote:
> On 10/04/10 13:15, Paul Nuzzi wrote:
>> On 10/01/2010 01:56 PM, Christopher J. PeBenito wrote:
>>> On 10/01/10 11:17, Paul Nuzzi wrote:
>>>> On 10/01/2010 08:02 AM, Dominick Grift wrote:
>>>>> On Thu, Sep 30, 2010 at 03:39:40PM -0400, Paul Nuzzi wrote:
>>>>>> I updated the patch based on recommendations from the mailing list.
>>>>>> All of hadoop's services are included in one module instead of
>>>>>> individual ones. Unconfined and sysadm roles are given access to
>>>>>> hadoop and zookeeper client domain transitions. The services are started
>>>>>> using run_init. Let me know what you think.
>>>>>
>>>>> Why do some hadoop domain need to manage generic tmp?
>>>>>
>>>>> files_manage_generic_tmp_dirs(zookeeper_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_$1_initrc_t)
>>>>> files_manage_generic_tmp_files(hadoop_$1_initrc_t)
>>>>> files_manage_generic_tmp_files(hadoop_$1_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_$1_t)
>>>>
>>>> This has to be done for Java JMX to work. All of the files are written to
>>>> /tmp/hsperfdata_(hadoop/zookeeper). /tmp/hsperfdata_ is labeled tmp_t while
>>>> all the files for each service are labeled with hadoop_*_tmp_t. The first service
>>>> will end up owning the directory if it is not labeled tmp_t.
>>>
>>> The hsperfdata dir in /tmp is certainly the bane of policy writers. Based on a quick look through the policy, it looks like the only dir they create in /tmp is this hsperfdata dir. I suggest you do something like
>>>
>>> files_tmp_filetrans(hadoop_t, hadoop_hsperfdata_t, dir)
>>> files_tmp_filetrans(zookeeper_t, hadoop_hsperfdata_t, dir)
>>>
>>> filetrans_pattern(hadoop_t, hadoop_hsperfdata_t, hadoop_tmp_t, file)
>>> filetrans_pattern(zookeeper_t, hadoop_hsperfdata_t, zookeeper_tmp_t, file)
>>>
>>
>> That looks like a better way to handle the tmp_t problem.
>>
>> I changed the patch with your comments. Hopefully this will be one of the last updates.
>> Tested on a CDH3 cluster as a module without any problems.
>
> There are several little issues with style, but it'll be easier just to fix them when it's committed.
>
> Other comments inline.
>
I did my best locking down the ports hadoop uses. Unfortunately, the services
use high, randomized ports, making tcp_connect_generic_port a must-have.
Hopefully hadoop will one day settle on static ports. I added the
hadoop_datanode port (50010), since it is important to lock down that service.
I changed the patch based on the rest of the comments.
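A hedged sketch of the two pieces mentioned above (corenetwork declaration
syntax; the exact lines in the patch may differ):

	# corenetwork.te.in: declare the datanode port so it can be locked down
	network_port(hadoop_datanode, tcp,50010,s0)

	# hadoop.te: needed until the services stop using randomized high ports
	corenet_tcp_connect_generic_port(hadoop_t)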
Signed-off-by: Paul Nuzzi <pjnuzzi@tycho.ncsc.mil>
The unconfined role runs java in the unconfined_java_t domain. The current
policy only has a domtrans interface, so the unconfined_java_t type is not
associated with unconfined_r. Add a run interface and change the unconfined
module to use this new interface.
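A hedged sketch of such a run interface and its use (the interface body in
the actual patch may differ; java_domtrans_unconfined is the existing
domtrans interface):

	interface(`java_run_unconfined',`
		gen_require(`
			type unconfined_java_t;
		')

		java_domtrans_unconfined($1)
		role $2 types unconfined_java_t;
	')

	# unconfined.te
	optional_policy(`
		java_run_unconfined(unconfined_t, unconfined_r)
	')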