As of util-linux-ng 2.18, the mount utility attempts to write to the root
of newly mounted filesystems in order to ensure that the r/w status of a
filesystem as shown in mtab is correct. To detect whether a filesystem is
r/w, mount calls access() with the W_OK argument, which results in an AVC
denial under current policy. If the call to access() fails, mount falls
back to modifying the access time of the directory being mounted on. As
mount already possesses the necessary privileges, the modification of the
access time succeeds (at least on systems with the futimens() function,
which has existed in Linux since kernel 2.6.22 and in glibc since version
2.6, i.e. around July 2007).
Signed-off-by: Chris Richards <gizmo@giz-works.com>
Both the system administrator and unprivileged users can use vlock
to lock the current console, whether logged in from the serial console
or over ssh.
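A minimal sketch of the access described, assuming a vlock_run() interface following the usual refpolicy naming convention (the interface name is illustrative, not taken from the patch itself):

# Illustrative: let both roles run vlock in its own domain.
optional_policy(`
	vlock_run(sysadm_t, sysadm_r)
	vlock_run(user_t, user_r)
')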
Signed-off-by: Harry Ciao <qingtao.cao@windriver.com>
* A pass cleaning up the style.
* Adjusted some regular expressions in the file contexts: .* is the same as (.*)?, since * already means 0 or more matches (see the example after this list).
* Renamed a few interfaces.
* Dropped two rules that require further explanation:
> +files_read_all_files(hadoop_t)
A very big privilege.
and
> +fs_associate(hadoop_tasktracker_t)
This is a domain, so the only files with this type should be the /proc/pid ones, which don't require associate permissions.
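On the regular expression point, with a made-up path and type, both of these file context specifications match exactly the same set of paths, so the shorter form is preferred:

# Before (path and type are illustrative):
/var/lib/hadoop(.*)?	gen_context(system_u:object_r:hadoop_var_lib_t,s0)
# After: matches identically, with a simpler expression:
/var/lib/hadoop.*	gen_context(system_u:object_r:hadoop_var_lib_t,s0)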
On 10/04/2010 02:18 PM, Christopher J. PeBenito wrote:
> On 10/04/10 13:15, Paul Nuzzi wrote:
>> On 10/01/2010 01:56 PM, Christopher J. PeBenito wrote:
>>> On 10/01/10 11:17, Paul Nuzzi wrote:
>>>> On 10/01/2010 08:02 AM, Dominick Grift wrote:
>>>>> On Thu, Sep 30, 2010 at 03:39:40PM -0400, Paul Nuzzi wrote:
>>>>>> I updated the patch based on recommendations from the mailing list.
>>>>>> All of hadoop's services are included in one module instead of
>>>>>> individual ones. Unconfined and sysadm roles are given access to
>>>>>> hadoop and zookeeper client domain transitions. The services are started
>>>>>> using run_init. Let me know what you think.
>>>>>
>>>>> Why do some hadoop domains need to manage generic tmp?
>>>>>
>>>>> files_manage_generic_tmp_dirs(zookeeper_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_$1_initrc_t)
>>>>> files_manage_generic_tmp_files(hadoop_$1_initrc_t)
>>>>> files_manage_generic_tmp_files(hadoop_$1_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_$1_t)
>>>>
>>>> This has to be done for Java JMX to work. All of the files are written to
>>>> /tmp/hsperfdata_(hadoop/zookeeper). /tmp/hsperfdata_ is labeled tmp_t while
>>>> all the files for each service are labeled with hadoop_*_tmp_t. The first service
>>>> will end up owning the directory if it is not labeled tmp_t.
>>>
>>> The hsperfdata dir in /tmp is certainly the bane of policy writers. Based on a quick look through the policy, it looks like the only dir they create in /tmp is this hsperfdata dir. I suggest you do something like
>>>
>>> files_tmp_filetrans(hadoop_t, hadoop_hsperfdata_t, dir)
>>> files_tmp_filetrans(zookeeper_t, hadoop_hsperfdata_t, dir)
>>>
>>> filetrans_pattern(hadoop_t, hadoop_hsperfdata_t, hadoop_tmp_t, file)
>>> filetrans_pattern(zookeeper_t, hadoop_hsperfdata_t, zookeeper_tmp_t, file)
>>>
>>
>> That looks like a better way to handle the tmp_t problem.
>>
>> I updated the patch based on your comments. Hopefully this will be one of the last updates.
>> Tested on a CDH3 cluster as a module without any problems.
>
> There are several little issues with style, but it'll be easier just to fix them when it's committed.
>
> Other comments inline.
>
I did my best locking down the ports hadoop uses. Unfortunately, the services use high, randomized ports, making
tcp_connect_generic_port a must-have. Hopefully one day hadoop will settle on static ports. I added the hadoop_datanode port, 50010, since it is important to lock down that service. I changed the patch based on the rest of the comments.
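A short sketch of what that looks like in policy (the port declaration follows corenetwork's existing network_port() convention; hadoop_t is shown as a representative domain):

# Declare the datanode port so that service can be locked down:
network_port(hadoop_datanode, tcp,50010,s0)

# The remaining high, randomized ports still require the generic
# connect permission:
corenet_tcp_connect_generic_port(hadoop_t)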
Signed-off-by: Paul Nuzzi <pjnuzzi@tycho.ncsc.mil>