Various changes to the Xen userspace policy, including:
- Add gntdev and gntalloc device node labeling (illustrated below).
- Create separate domains for blktap and qemu-dm rather than leaving them in xend_t.
- No need to allow xen userspace to create its own device nodes anymore;
this is handled automatically by the kernel/udev.
- No need to allow xen userspace access to generic raw storage; even if
using dedicated partitions/LVs for disk images, you can just label them
with xen_image_t.
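For illustration, the gntdev/gntalloc labeling from the first item might
look like this (the paths and the xen_device_t label are assumptions for
this sketch, not copied from the patch):
/dev/xen/gntdev		-c	gen_context(system_u:object_r:xen_device_t,s0)
/dev/xen/gntalloc	-c	gen_context(system_u:object_r:xen_device_t,s0)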
The blktap and qemu-dm domains are stubs and will likely need to be
further expanded, but they should definitely not be left in xend_t. Not
sure if I should try to use qemu_domain_template() instead for qemu-dm,
but I don't see any current users of that template (qemu_t uses
virt_domain_template instead), and qemu-dm has specific interactions
with Xen.
Signed-off-by: Stephen Smalley <sds@tycho.nsa.gov>
Add support for logging in and using the system from /dev/console.
1. Make getty_t able to use /dev/console;
2. Make local_login_t able to relabel /dev/console to user tty types;
3. Provide the type_change rule for relabeling /dev/console.
All of the above support is controlled by the allow_console tunable.
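A minimal sketch of what this might look like (console_device_t and
user_tty_device_t are existing refpolicy types; the exact rules and the
user_t example domain are illustrative, not copied from the patch):
tunable_policy(`allow_console',`
	allow getty_t console_device_t:chr_file rw_chr_file_perms;
	allow local_login_t console_device_t:chr_file relabelfrom;
	allow local_login_t user_tty_device_t:chr_file relabelto;
')
# illustrative type_change rule for a user domain (user_t here):
type_change user_t console_device_t:chr_file user_tty_device_t;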
Signed-off-by: Harry Ciao <qingtao.cao@windriver.com>
The attached patch adds a few database object classes, as follows:
* db_schema
------------
A schema object acts as a namespace within a database, similar to
a directory in a filesystem.
Some (but not all) database objects are logically stored within a
schema, and these objects can be qualified by the schema name. For
example, a table "my_tbl" within a schema "my_scm" is identified as
"my_scm.my_tbl". This table is completely different from
"your_scm.my_tbl", which is a table within the schema "your_scm".
Its characteristics are similar to a directory in a filesystem, so
it has similar permissions.
The 'search' permission controls resolving an object name within a
schema. The 'add_name' and 'remove_name' permissions control adding
and removing an object to/from a schema.
See also,
http://developer.postgresql.org/pgdocs/postgres/sql-createschema.html
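A minimal sketch of the class definition in policy/flask/access_vectors
(assuming it inherits the common database permissions like the existing
db_* classes, plus a matching entry in security_classes):
class db_schema
inherits database
{
	search
	add_name
	remove_name
}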
In a past discussion, the Rubix folks were concerned about the lack
of an object class definition for schemas and catalogs (a catalog
being an upper-level namespace). Since I am not certain whether there
is any disadvantage to applying the 'db_schema' class to catalogs as
well, I have not added a separate catalog definition yet.
The default security contexts of the 'db_table' and 'db_procedure'
classes are now computed using a type_transition on the 'db_schema'
class instead of the 'db_database' class. This more correctly reflects
the logical hierarchy of database objects.
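For illustration, the new defaults would be computed by rules along
these lines (sepgsql_schema_t is assumed here; the other names follow
the existing sepgsql_* convention):
type_transition sepgsql_client_type sepgsql_schema_t:db_table sepgsql_table_t;
type_transition sepgsql_client_type sepgsql_schema_t:db_procedure sepgsql_proc_exec_t;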
* db_view
----------
A view object acts as a virtual table. We can run SELECT statements
on views, although they have no physical entities.
The definition of a view is expanded at run time, which allows us to
describe complex queries while keeping them readable.
This object class uniquely provides the 'expand' permission, which
controls whether a user can expand the view or not.
The default security context shall be computed by a type transition
rule with the schema object that owns the view.
See also,
http://developer.postgresql.org/pgdocs/postgres/sql-createview.html
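Sketch of the corresponding class definition (again assuming the
common database permissions are inherited):
class db_view
inherits database
{
	expand
}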
* db_sequence
--------------
A sequence object is a sequential number generator.
This object class uniquely provides the 'get_value', 'next_value' and
'set_value' permissions. The 'get_value' permission controls
referencing the sequence object. The 'next_value' permission controls
fetching and incrementing the value of the sequence object. The
'set_value' permission controls setting an arbitrary value.
The default security context shall be computed by a type transition
rule with the schema object that owns the sequence.
See also,
http://developer.postgresql.org/pgdocs/postgres/sql-createsequence.html
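Sketch of the class definition (same assumption about the inherited
common database permissions):
class db_sequence
inherits database
{
	get_value
	next_value
	set_value
}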
* db_language
--------------
A language object is an installed engine for executing procedures.
PostgreSQL supports defining SQL procedures in regular scripting
languages such as Perl and Tcl, not only in SQL or binary modules.
In addition, v9.0 or later supports the DO statement. It allows us to
execute a script statement on the server side without defining a SQL
procedure, so we need to control whether a user can execute a DO
statement in a given language or not.
This object class uniquely provides the 'implement' and 'execute'
permissions. The 'implement' permission controls whether a procedure
can be implemented in this language or not, so it takes the security
context of the procedure as the subject. The 'execute' permission
controls executing a code block using the DO statement.
The default security context shall be computed by a type transition
rule with a database object, because a language is not owned by any
particular schema.
In the default policy, we provide two types: 'sepgsql_lang_t' and
'sepgsql_safe_lang_t', the latter of which allows unprivileged users
to execute DO statements. The default is 'sepgsql_lang_t'.
We assume a newly installed language may be harmful, so the DBA has to
relabel it explicitly if he wants user-defined procedures to use the
language.
See also,
http://developer.postgresql.org/pgdocs/postgres/sql-createlanguage.html
http://developer.postgresql.org/pgdocs/postgres/sql-do.html
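Sketch of the class definition carrying the two permissions described
above (common database permissions assumed to be inherited):
class db_language
inherits database
{
	implement
	execute
}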
P.S.)
I found a bug in MCS: it didn't constrain the 'relabelfrom' permission
of the 'db_procedure' class. IIRC, I fixed it before, but the fix may
have been applied only on the MLS side. Sorry.
Thanks,
--
KaiGai Kohei <kaigai@ak.jp.nec.com>
policy/flask/access_vectors | 29 ++++++++
policy/flask/security_classes | 6 ++
policy/mcs | 16 ++++-
policy/mls | 58 ++++++++++++++-
policy/modules/kernel/kernel.if | 8 ++
policy/modules/services/postgresql.if | 125 +++++++++++++++++++++++++++++++--
policy/modules/services/postgresql.te | 116 +++++++++++++++++++++++++++++-
7 files changed, 342 insertions(+), 16 deletions(-)
On 01/05/2011 08:48 AM, Christopher J. PeBenito wrote:
> On 12/16/10 12:32, Paul Nuzzi wrote:
>> On 12/15/2010 03:54 PM, Christopher J. PeBenito wrote:
>>> On 12/10/10 18:22, Paul Nuzzi wrote:
>>>> Added labeled IPSec support to hadoop. SELinux will be able to enforce what services are allowed to
>>>> connect to. Labeled IPSec can enforce the range of services they can receive from. This enforces
>>>> the architecture of Hadoop without having to modify any of the code. This adds a level of
>>>> confidentiality, integrity, and authentication provided outside the software stack.
>>>
>>> A few things.
>>>
>>> The verb used in Reference Policy interfaces for peer recv is recvfrom
>>> (a holdover from previous labeled networking implementations). So the
>>> interfaces are like hadoop_recvfrom_datanode().
>>
>> Easy change.
>>
>>> It seems like setkey should be able to setcontext any type used on ipsec
>>> associations. I think the best thing would be to add additional support
>>> to either the ipsec or corenetwork modules (I haven't decided which one
>>> yet) for associations. So, say we have an interface called
>>> ipsec_spd_type() which adds the parameter type to the attribute
>>> ipsec_spd_types. Then we can have an allow setkey_t
>>> ipsec_spd_types:association setkey; rule and we don't have to update it
>>> every time more labeled network is added.
>>
>> That seems a lot less clunky than updating setkey every time we add a new association.
>>
>>> This is definitely wrong since its not a file:
>>> +files_type(hadoop_lan_t)
>>
>> Let me know how you would like to handle associations and I could update the
>> patch.
>
> Lets go with putting the associations in corenetwork.
>
>> Will the files_type error be cleared up when we re-engineer this?
>
> I'm not sure what you mean. The incorrect rule was added in your patch.
>
Adds labeled IPSec policy to hadoop to control the remote processes that are allowed to connect to the cloud's services.
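A rough sketch of the association support discussed above (the
ipsec_spd_type() name and ipsec_spd_types attribute come from the
discussion; since the support ended up in corenetwork, the final names
and form may differ):
interface(`ipsec_spd_type',`
	gen_require(`
		attribute ipsec_spd_types;
	')
	typeattribute $1 ipsec_spd_types;
')
# so setkey no longer needs a new rule for every labeled network type:
allow setkey_t ipsec_spd_types:association setcontext;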
Signed-off-by: Paul Nuzzi <pjnuzzi@tycho.ncsc.mil>
As of util-linux-ng 2.18, the mount utility now attempts to write to the root
of newly mounted filesystems. It does this in an attempt to ensure that the
r/w status of a filesystem as shown in mtab is correct. To detect whether
a filesystem is r/w, mount calls access() with the W_OK argument. This
results in an AVC denial with current policy. As a fallback, mount also
attempts to modify the access time of the directory being mounted on if
the call to access() fails. As mount already possesses the necessary
privileges, the modification of the access time succeeds (at least on systems
with the futimens() function, which has existed in Linux since kernel 2.6.22
and glibc since version 2.6, or about July 2007).
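A hedged sketch of the kind of rule this behavior calls for (mount_t
and the mountpoint attribute already exist in refpolicy; whether to
allow or merely dontaudit these checks is a separate decision not
settled here):
# access(W_OK) triggers a write check on the mountpoint directory,
# and the futimens() fallback additionally needs setattr:
allow mount_t mountpoint:dir { getattr write setattr };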
Signed-off-by: Chris Richards <gizmo@giz-works.com>
* a pass cleaning up the style.
* adjusted some regular expressions in the file contexts: .* is the same as (.*)? since * means 0 or more matches.
* renamed a few interfaces
* dropped two rules, as they require further explanation:
> +files_read_all_files(hadoop_t)
A very big privilege.
and
> +fs_associate(hadoop_tasktracker_t)
This is a domain, so the only files with this type should be the /proc/pid ones, which don't require associate permissions.
On 10/04/2010 02:18 PM, Christopher J. PeBenito wrote:
> On 10/04/10 13:15, Paul Nuzzi wrote:
>> On 10/01/2010 01:56 PM, Christopher J. PeBenito wrote:
>>> On 10/01/10 11:17, Paul Nuzzi wrote:
>>>> On 10/01/2010 08:02 AM, Dominick Grift wrote:
>>>>> On Thu, Sep 30, 2010 at 03:39:40PM -0400, Paul Nuzzi wrote:
>>>>>> I updated the patch based on recommendations from the mailing list.
>>>>>> All of hadoop's services are included in one module instead of
>>>>>> individual ones. Unconfined and sysadm roles are given access to
>>>>>> hadoop and zookeeper client domain transitions. The services are started
>>>>>> using run_init. Let me know what you think.
>>>>>
>>>>> Why do some hadoop domain need to manage generic tmp?
>>>>>
>>>>> files_manage_generic_tmp_dirs(zookeeper_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_$1_initrc_t)
>>>>> files_manage_generic_tmp_files(hadoop_$1_initrc_t)
>>>>> files_manage_generic_tmp_files(hadoop_$1_t)
>>>>> files_manage_generic_tmp_dirs(hadoop_$1_t)
>>>>
>>>> This has to be done for Java JMX to work. All of the files are written to
>>>> /tmp/hsperfdata_(hadoop/zookeeper). /tmp/hsperfdata_ is labeled tmp_t while
>>>> all the files for each service are labeled with hadoop_*_tmp_t. The first service
>>>> will end up owning the directory if it is not labeled tmp_t.
>>>
>>> The hsperfdata dir in /tmp certainly the bane of policy writers. Based on a quick look through the policy, it looks like the only dir they create in /tmp is this hsperfdata dir. I suggest you do something like
>>>
>>> files_tmp_filetrans(hadoop_t, hadoop_hsperfdata_t, dir)
>>> files_tmp_filetrans(zookeeper_t, hadoop_hsperfdata_t, dir)
>>>
>>> filetrans_pattern(hadoop_t, hadoop_hsperfdata_t, hadoop_tmp_t, file)
>>> filetrans_pattern(zookeeper_t, hadoop_hsperfdata_t, zookeeper_tmp_t, file)
>>>
>>
>> That looks like a better way to handle the tmp_t problem.
>>
>> I changed the patch with your comments. Hopefully this will be one of the last updates.
>> Tested on a CDH3 cluster as a module without any problems.
>
> There are several little issues with style, but it'll be easier just to fix them when its committed.
>
> Other comments inline.
>
I did my best locking down the ports hadoop uses. Unfortunately, the services use high, randomized ports, making
tcp_connect_generic_port a must-have. Hopefully one day hadoop will settle on static ports. I added the hadoop_datanode port 50010, since it is important to lock down that service. I changed the patch based on the rest of the comments.
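The datanode port presumably gets declared in the usual corenetwork
style, e.g. (illustrative):
network_port(hadoop_datanode, tcp,50010,s0)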
Signed-off-by: Paul Nuzzi <pjnuzzi@tycho.ncsc.mil>
Retry: forgot to add the mmap_low_domain_type attribute to domain_mmap_low():
Inspired by a similar implementation in Fedora.
Wine and vbetool do not always actually need the ability to mmap a low area of the address space.
In some cases this can be silently denied.
Therefore, introduce an interface that facilitates "mmap low" conditionally, and the corresponding boolean.
Also implement booleans for wine and vbetool that allow attempts by wine and vbetool to mmap a low area of the address space to go unaudited.
Rename the domain_mmap_low interface to domain_mmap_low_uncond.
Change the domain_mmap_low call to domain_mmap_low_uncond for xserver_t. Also move this call into the ifndef distro_redhat block, because Red Hat does not need this ability.
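A rough sketch of the conditional support described above (the
mmap_low_allowed boolean name is an assumption; only the attribute and
interface names come from the commit text):
attribute mmap_low_domain_type;
gen_tunable(mmap_low_allowed, false)
tunable_policy(`mmap_low_allowed',`
	allow mmap_low_domain_type self:memprotect mmap_zero;
')
interface(`domain_mmap_low',`
	gen_require(`
		attribute mmap_low_domain_type;
	')
	typeattribute $1 mmap_low_domain_type;
')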
Signed-off-by: Dominick Grift <domg472@gmail.com>
dontaudit attempts to read/write device_t chr files occurring before udev relabel
allow init_t and initrc_t read/write on device_t chr files (necessary to boot without unconfined)
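In rule form, this is roughly (sketch; 'domain' is the standard
refpolicy attribute):
dontaudit domain device_t:chr_file { read write };
allow init_t device_t:chr_file { read write };
allow initrc_t device_t:chr_file { read write };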
Signed-off-by: Jeremy Solt <jsolt@tresys.com>
Move devtmpfs to devices module (remove from filesystem module)
Make device_t a filesystem
Add interface for associating types with device_t filesystem (dev_associate)
Call dev_associate from dev_filetrans
Allow all device nodes to associate with the device_t filesystem
Remove dev_tmpfs_filetrans_dev from kernel_t
Remove fs_associate_tmpfs(initctl_t) - redundant, it was in dev_filetrans, now in dev_associate
Add a mounton interface, to allow the kernel to mounton device_t
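A minimal sketch of the dev_associate interface mentioned above
(assuming the conventional interface form):
interface(`dev_associate',`
	gen_require(`
		type device_t;
	')
	allow $1 device_t:filesystem associate;
')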
Signed-off-by: Jeremy Solt <jsolt@tresys.com>
We went back and reread the bindresvport code in glibc.
It turns out the range of ports that this will reserve is 512-1024 rather
than 600-1024.
The code actually first tries to reserve a port from 600-1024 and, if
they are ALL reserved, will try 512-599.
So we need to change corenetwork to reflect this.
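In terms of the generated base policy, the effect is roughly the
following (reserved_port_t and hi_reserved_port_t are the existing
types; the 1024 upper bound above is taken as exclusive):
portcon tcp 512-1023 gen_context(system_u:object_r:hi_reserved_port_t,s0)
portcon udp 512-1023 gen_context(system_u:object_r:hi_reserved_port_t,s0)
portcon tcp 1-511 gen_context(system_u:object_r:reserved_port_t,s0)
portcon udp 1-511 gen_context(system_u:object_r:reserved_port_t,s0)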