When virsh is used to manage virtual guests, the parent domain requires
stream_connect rights towards the virtd_t domain. This patch adds that
right for initrc_t (for init scripts managing the environment) and
sysadm_t (the system administrator).
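A minimal sketch of the added rules, assuming the virt_stream_connect() interface is used to wrap the stream connect permission (the interface name is the expected one, not copied from the diff):

    optional_policy(`
        # Allow init scripts and the administrator shell to connect to the
        # libvirt daemon's unix stream socket, which is what virsh uses.
        virt_stream_connect(initrc_t)
        virt_stream_connect(sysadm_t)
    ')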
Signed-off-by: Sven Vermeulen <sven.vermeulen@siphos.be>
When administering asterisk, a frequently run command is "asterisk -r",
which yields the asterisk CLI (when the asterisk server is running). To
be able to run this, you need asterisk_stream_connect privileges.
Assign these privileges to the sysadm_r role.
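A sketch of the grant, assuming the privilege lands on the sysadm_t domain (the domain behind the sysadm_r role) via the existing asterisk_stream_connect() interface:

    optional_policy(`
        # Let the administrator's shell connect to the Asterisk control
        # socket so that "asterisk -r" can attach to the running daemon.
        asterisk_stream_connect(sysadm_t)
    ')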
Signed-off-by: Sven Vermeulen <sven.vermeulen@siphos.be>
The system administrator (in sysadm_t) is the only "user" domain that is
allowed to call portage-related services. So it also gains the privilege
to execute portage tree management functions (and as such transition to
portage_fetch_t).
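A sketch of what this could look like in the sysadm policy, assuming a portage_run_fetch()-style interface that bundles the domain transition with the role authorization (the interface name and argument order are assumptions):

    optional_policy(`
        # Transition to portage_fetch_t when the administrator runs the
        # portage tree management tools, and pair sysadm_r with that domain.
        portage_run_fetch(sysadm_t, sysadm_r)
    ')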
Signed-off-by: Sven Vermeulen <sven.vermeulen@siphos.be>
The /sbin/rc binary is used by the system administrator to manage
runlevels (add/delete), check runlevel state, etc., none of which
requires a domain transition to occur. Hence /sbin/rc (now labeled
rc_exec_t) is allowed to be executed without transitioning.
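A minimal sketch using the can_exec() support macro (whether the patch uses the macro directly or a small wrapper interface is an assumption):

    # Execute /sbin/rc (rc_exec_t) in the caller's own domain;
    # no domain transition takes place.
    can_exec(sysadm_t, rc_exec_t)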
Signed-off-by: Sven Vermeulen <sven.vermeulen@siphos.be>
After a quick discussion with Dominique, here is a new attempt, prompted by two issues:
1. There is no need (it is even forbidden) to have "role $1 types foo_exec_t".
2. The suggestion to use the raid_run_mdadm name instead of raid_mdadm_role. The
idea here is to use raid_mdadm_role for prefixed domains (cf. screen),
whereas raid_run_mdadm is to transition to and run in a specific domain.
Without wanting to (re?)start any discussion on prefixed versus non-prefixed
domains, such a naming convention could help us keep the reference policy
cleaner (and the naming conventions simple).
Also, the refpolicy InterfaceNaming document only talks about run, not role.
So, without much further ado... ;-)
The system administrator (sysadm_r role) needs to use mdadm, but is not
allowed to use the mdadm_t type.
Rather than extend raid_domtrans_mdadm to allow this as well, use a
raid_mdadm_role interface (a bit more in line with other role usages).
The other users of raid_domtrans_mdadm are all domains that run in the system_r
role, which already has this type allowed (as per the system/raid.te
definition), so it doesn't hurt them to keep using raid_domtrans_mdadm.
Signed-off-by: Sven Vermeulen <sven.vermeulen@siphos.be>
Note that extra privileges may need to be granted to the samhain domain
if its default configuration file (/etc/samhainrc) is changed.
The samhain program can be used in the following way:
(In secadm_r role)
1. Initialize filesystem signature database:
newrole -l s15:c0.c1023 -p -- -c "samhain -t init"
(Note: the current secadm console will be blocked until
the database build is complete.)
2. Start the samhain daemon to check filesystem integrity:
newrole -l s15:c0.c1023 -p -- -c "samhain -t check -D"
3. Update filesystem signature database:
newrole -l s15:c0.c1023 -p -- -c "samhain -t update"
(In sysadm_r role)
1. Start samhain in daemon mode:
run_init /etc/init.d/samhain start
2. Stop samhain daemon:
run_init /etc/init.d/samhain stop
3. Check samhain daemon status:
run_init /etc/init.d/samhain status
4. Read/write samhain log files:
newrole -l s15:c0.c1023 -p -- -c "cat /var/log/samhain_log"
5. Remove samhain database files:
newrole -l s15:c0.c1023 -p -- -c "rm /var/lib/samhain/samhain_file"
Note:
1. Stop the samhain daemon before updating the signature database.
2. Don't try to start the samhain daemon twice.
3. SELinux needs to be switched to permissive mode in order to remove
the samhain_log files from /var/log/.
Signed-off-by: Harry Ciao <qingtao.cao@windriver.com>
Both the system administrator and the unprivileged user can use vlock
to lock the current console when logged in either from the serial console
or over ssh.
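A sketch of the grants, assuming a vlock_run()-style interface is provided by the vlock module (the interface name and the unprivileged domain/role names are assumptions):

    # Allow both the administrative and the unprivileged login domains
    # to run vlock in its own domain from their shells.
    vlock_run(sysadm_t, sysadm_r)
    vlock_run(user_t, user_r)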
Signed-off-by: Harry Ciao <qingtao.cao@windriver.com>
* a pass cleaning up the style
* adjusted some regular expressions in the file contexts: .* is the same as (.*)?, since * means 0 or more matches (a short example follows below)
* renamed a few interfaces
* two rules that I dropped, as they require further explanation:
> +files_read_all_files(hadoop_t)
A very big privilege.
and
> +fs_associate(hadoop_tasktracker_t)
This is a domain, so the only files with this type should be the /proc/pid ones, which don't require associate permissions.
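Regarding the regular expression cleanup mentioned above: for illustration (the path and type below are hypothetical, not taken from the actual hadoop.fc), these two file context entries match exactly the same set of paths, because .* already matches the empty string:

    /var/log/hadoop(.*)?	gen_context(system_u:object_r:hadoop_log_t,s0)
    /var/log/hadoop.*	gen_context(system_u:object_r:hadoop_log_t,s0)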
On 10/04/2010 02:18 PM, Christopher J. PeBenito wrote:
> On 10/04/10 13:15, Paul Nuzzi wrote:
>> On 10/01/2010 01:56 PM, Christopher J. PeBenito wrote:
>>> On 10/01/10 11:17, Paul Nuzzi wrote:
>>>> On 10/01/2010 08:02 AM, Dominick Grift wrote:
>>>>> On Thu, Sep 30, 2010 at 03:39:40PM -0400, Paul Nuzzi wrote:
>>>>>> I updated the patch based on recommendations from the mailing list.
>>>>>> All of hadoop's services are included in one module instead of
>>>>>> individual ones. Unconfined and sysadm roles are given access to
>>>>>> hadoop and zookeeper client domain transitions. The services are started
>>>>>> using run_init. Let me know what you think.
>>>>>
>>>>> Why do some hadoop domain need to manage generic tmp?
>>>>>
>>>>> files_manage_generic_tmp_dirs(zookeeper_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_$1_initrc_t)
>>>>> files_manage_generic_tmp_files(hadoop_$1_initrc_t)
>>>>> files_manage_generic_tmp_files(hadoop_$1_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_$1_t)
>>>>
>>>> This has to be done for Java JMX to work. All of the files are written to
>>>> /tmp/hsperfdata_(hadoop/zookeeper). /tmp/hsperfdata_ is labeled tmp_t while
>>>> all the files for each service are labeled with hadoop_*_tmp_t. The first service
>>>> will end up owning the directory if it is not labeled tmp_t.
>>>
>>> The hsperfdata dir in /tmp is certainly the bane of policy writers. Based on a quick look through the policy, it looks like the only dir they create in /tmp is this hsperfdata dir. I suggest you do something like
>>>
>>> files_tmp_filetrans(hadoop_t, hadoop_hsperfdata_t, dir)
>>> files_tmp_filetrans(zookeeper_t, hadoop_hsperfdata_t, dir)
>>>
>>> filetrans_pattern(hadoop_t, hadoop_hsperfdata_t, hadoop_tmp_t, file)
>>> filetrans_pattern(zookeeper_t, hadoop_hsperfdata_t, zookeeper_tmp_t, file)
>>>
>>
>> That looks like a better way to handle the tmp_t problem.
>>
>> I changed the patch with your comments. Hopefully this will be one of the last updates.
>> Tested on a CDH3 cluster as a module without any problems.
>
> There are several little issues with style, but it'll be easier just to fix them when it's committed.
>
> Other comments inline.
>
I did my best to lock down the ports hadoop uses. Unfortunately the services use high, randomized ports, making
tcp_connect_generic_port a must-have. Hopefully one day hadoop will settle on static ports. I added the hadoop_datanode port (50010), since it is important to lock down that service. I changed the patch based on the rest of the comments.
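A sketch of the port handling described above; network_port() is the standard corenetwork declaration macro, while the hadoop-side domain name and interface calls are assumptions:

    # corenetwork declaration: give the DataNode port its own type so it
    # no longer falls under the generic port type.
    network_port(hadoop_datanode, tcp,50010,s0)

    # hadoop policy (sketch): bind to the declared port, and keep generic
    # connect for the high, randomized ports the services still use.
    corenet_tcp_bind_hadoop_datanode_port(hadoop_datanode_t)
    corenet_tcp_connect_generic_port(hadoop_datanode_t)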
Signed-off-by: Paul Nuzzi <pjnuzzi@tycho.ncsc.mil>