Deploy Kubernetes with Ansible on Atomic

I've been playing with Project Atomic as a platform to run Docker containers for some time now. The reasons I like Project Atomic are something for another blog post. One of them, however, is that while it's a minimal OS, it does come with Python, so I can use Ansible for orchestration and configuration management.

Now, running Docker containers on a single host is nice, but the real fun starts when you can run containers spread over a number of hosts. This is easier said than done and requires some extra services like a scheduler, service discovery, overlay networking,... There are several solutions, but one that I particularly like is Kubernetes.

Project Atomic happens to ship with all the pieces needed to deploy a Kubernetes cluster using Flannel for the overlay networking. The only thing left is the configuration, which happens to be something Ansible is particularly good at.

The following describes how you can deploy a four-node cluster on top of Atomic hosts using Ansible. Let's start with the Ansible inventory.

Inventory

We will keep things simple here by using a single file-based inventory where we explicitly specify the IP addresses of the hosts for testing purposes. The important parts here are the two groups k8s-nodes and k8s-master. The k8s-master group should contain only one host, which will become the cluster manager. All hosts under k8s-nodes will become nodes to run containers on.

[k8s-nodes]
atomic02 ansible_ssh_host=10.0.0.2
atomic03 ansible_ssh_host=10.0.0.3
atomic04 ansible_ssh_host=10.0.0.4


[k8s-master]
atomic01 ansible_ssh_host=10.0.0.1

Variables

Currently these roles don't have many configurable variables, but we do need to provide the variables for the k8s-nodes group. Create a folder group_vars with a file that has the same name as the group. If you checked out the repository you already have it.

$ tree group_vars/
group_vars/
    k8s-nodes

The file should have following variables defined.

skydns_enable: true

# IP address of the DNS server.
# Kubernetes will create a pod with several containers, serving as the DNS
# server and expose it under this IP address. The IP address must be from
# the range specified as kube_service_addresses.
# And this is the IP address you should use as address of the DNS server
# in your containers.
dns_server: 10.254.0.10

dns_domain: kubernetes.local

Playbook

Now that we have our inventory we can create our playbook. First we configure the k8s master node; once that is done, we configure the k8s nodes.

deploy_k8s.yml

 - name: Deploy k8s Master
   hosts: k8s-master
   remote_user: centos
   become: true
   roles:
     - k8s-master

 - name: Deploy k8s Nodes
   hosts: k8s-nodes
   remote_user: centos
   become: true
   roles:
     - k8s-nodes

Run the playbook.

  ansible-playbook -i hosts deploy_k8s.yml

If all ran without errors you should have your Kubernetes cluster running. Let's see if we can connect to it. You will need kubectl. On Fedora you can install the kubernetes-client package.
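For example (assuming a recent Fedora release with dnf available):

$ sudo dnf install kubernetes-client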

$ kubectl --server=192.168.124.40:8080 get nodes
NAME              STATUS    AGE
192.168.124.166   Ready     20s
192.168.124.55    Ready     20s
192.168.124.62    Ready     19s

That looks good. Let's see if we can run a container on this cluster.

$ kubectl --server=192.168.124.40:8080 run nginx --image=nginx
replicationcontroller "nginx" created

Check the status:

$ kubectl --server=192.168.124.40:8080 get pods
NAME          READY     STATUS    RESTARTS   AGE
nginx-ri1dq   0/1       Pending   0          55s

If you see the pod status in state Pending, just wait a few moments. If this is the first time you run the nginx container image it needs to be downloaded first, which can take some time. Once your pod is running you can try to enter the container.

$ kubectl --server=192.168.124.40:8080 exec -ti nginx-ri1dq -- bash
root@nginx-ri1dq:/#

This is a rather basic setup (no HA masters, no auth, etc.). The idea is to improve these Ansible roles over time and add more advanced configuration options.

If you are interested and want to try it out yourself you can find the source here:

https://gitlab.com/vincentvdk/ansible-k8s-atomic.git

Adding new PHP versions to CentOS7 and ISPConfig


Currently I'm using ISPConfig to manage several websites and the accompanying things like DNS, mail, databases, etc.

This setup runs on CentOS7 since that's my preferred OS. By default CentOS7 comes with PHP 5.4, which went EOL this September. A lot of the newer PHP-based applications like Drupal 8 want at least PHP 5.5, so it was time to update.

Since the default PHP version is supported and receives backports until the EOL of the CentOS release, I decided to keep the default 5.4 version and add the newer versions as an option. ISPConfig also provides a way to use multiple PHP versions.

Software Collections

The RHEL "ecosystem" has something called Software Collections for some time now and the goal is to have more up to date software available without having to update the default packages.

Install the Software Collections

Install the software collection utils.

yum install scl-utils

Add the SCL repository for the PHP version you want to install. The link can be found on the Software Collections website.

rpm -ivh https://www.softwarecollections.org/en/scls/rhscl/php55/epel-7-x86_64/download/rhscl-php55-epel-7-x86_64.noarch.rpm

Install php packages

Next, install the PHP packages you need. In my setup I make use of php-fpm to run PHP applications.

yum install php55-php php55-php-mysqlnd php55-php-fpm php55-php-mbstring php55-php-opcache

You can now test the PHP version by enabling the software collection. Software Collections use a dedicated file system hierarchy to avoid possible conflicts between a Software Collection and the base system installation. These are stored under /opt/rh/.
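A quick way to see what ended up where (the scl --list syntax may differ in newer scl-utils releases, and the paths below assume the php55 collection installed above):

# list the collections installed on this host
scl --list

# each collection gets its own root under /opt/rh/
ls /opt/rh/
ls /opt/rh/php55/root/usr/bin/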

[root@scl-test ~]# scl enable php55 bash

Check the version.

[root@scl-test ~]# php -v
PHP 5.5.21 (cli) (built: Jun 26 2015 06:07:04)
Copyright (c) 1997-2014 The PHP Group
Zend Engine v2.5.0, Copyright (c) 1998-2014 Zend Technologies
[root@scl-test ~]#

Configure php-fpm and ISPConfig

To avoid conflicts with the php-fpm service that is already running, we need to change the port of the php-fpm service from the collection.

sed -e 's/9000/9500/' -i /opt/rh/php55/root/etc/php-fpm.d/www.conf

If you have SELinux enabled you also need to execute

semanage port -a -t http_port_t -p tcp 9500

Now you can start the php-fpm service from the software collection:

systemctl start php55-php-fpm
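To check that the collection's FPM pool actually came up on its new port, a quick sanity check (assuming the port change to 9500 from above):

systemctl enable php55-php-fpm
systemctl status php55-php-fpm
ss -tlnp | grep 9500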

A critical view on Docker

TL;DR Before you start reading this, I want to make it clear that I absolutely don't hate Docker or the application container idea in general. I really see containers becoming a new way of doing things in addition to the existing technologies. In fact, I use containers myself more and more.

Currently I'm using Docker for local development because it's so easy to get your environment up and running in just a few seconds. But of course, that is "local" development. Things start to get interesting when you want to deploy over multiple Docker hosts in a production environment.

At the "Pragmatic Docker Day" a lot of people who were using (some even in production) or experimenting with Docker showed up. Other people were completely new to Docker so there was a good mix.

During the Open Spaces in the afternoon we had a group of people who decided to stay outside (the weather was really too nice to stay inside) and started discussing the talks given in the morning sessions. This evolved into a rather good discussion about everyone's personal view on the current state of containers and what they might bring in the future. People chimed in and added their opinions to the conversation.

That inspired me to write about the following items which are a combination of the things that came up during the conversations and my own view on the current state of Docker.

The Dockerfile

A lot of people are now using some configuration management tool and have invested quite some time in their tool of choice to deploy and manage the state of their infrastructure. Docker provides the Dockerfile to build and configure your container images, and it feels a bit like a "dirty" hack compared to the features those config management tools provide.

Quite some people are using their config management tool to build their container images. I, for instance, upload my Ansible playbooks into the image during the build and then run them. This allows me to reuse existing work that I know works, and I can use it for both containers and non-containers.

It would have been nice if Docker somehow provided a way to integrate the existing configuration management tools a bit better. Vagrant does a better job here.

As far as I know you also can't use variables (think Puppet Hiera or Ansible inventory) inside your Dockerfile, something configuration management tools happen to be very good at.

Bash scripting

When building more complex Docker images you notice that a lot of Bash scripting is used to prep the image and make it do what you want: passing variables into configuration files, creating users, preparing storage, configuring and starting services, and so on. While Bash is not necessarily a bad thing, it all feels like a workaround for things that are so simple when not using containers.

Dev vs Ops all over again?

The people I talked to agreed that Docker is rather developer focused and that it allows developers to build images containing a lot of stuff you might have no control over. It abstracts away possible issues. The container works, so all is well... right?

I believe that when you start building and using containers, the DevOps aspect is more important than ever. If, for instance, a CVE is found in a library or service that has been included in the container image, you'll need to update your base image and then roll the fix out through your deployment chain. To make this possible, all stakeholders must know what is included in which version of the Docker image. Needless to say, this needs both ops and devs working together. I don't think there's a need for the "separation of concerns" that Docker likes to advocate. Haven't we learned that creating silos isn't the best idea?

More complexity

Everything in the way you used to work becomes different once you start using containers. The fact that you can't ssh into something or let your configuration management make some changes just feels awkward.

Networking

By default Docker creates a Linux Bridge on the host where it creates interfaces for each container that gets started. It then adjusts the iptables nat table to pass traffic entering a port on the host to the exposed port inside the container.
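You can see both pieces on any Docker host. A small sketch (the container name and port are hypothetical; brctl comes from the bridge-utils package):

# start a container and publish a port on the host
docker run -d --name web -p 8080:80 nginx

# the container's veth interface is attached to the docker0 bridge
brctl show docker0

# and the published port shows up as a DNAT rule in the nat table
iptables -t nat -L -n | grep 8080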

To have a more advanced network configuration you need to look at tools like Weave, Flannel, etc., which require more research to see what fits your specific use case best.

Recently I was wondering if it was possible to have multiple NICs inside a container, because I wanted to test Ansible playbooks that configure multiple NICs. Currently it's not possible, but there's a ticket open on GitHub (https://github.com/docker/docker/issues/1824) which doesn't give me much hope.

Service discovery

Once you go beyond playing with containers on your laptop and start using multiple Docker hosts to scale your applications, you need a way to know where the specific service you want to connect to is running and on which port. You probably don't want to manually define ports per container on each host, because that becomes tedious quite fast. This is where tools like Consul, etcd, etc. come in. Again, some extra tooling and complexity.

Storage

You will always have something that needs persistence, and when you do, you'll need storage. Now, when using containers the Docker way, you are supposed to put as much as possible inside the container image. But some things, like log files, configuration files, application-generated data, etc., are a moving target.

Docker provides volumes to pass storage from the host into a container. Basically you map a path on the host to a path inside the container. But this poses some questions: how do I handle this when the container gets restarted or started on another host? How can I make sure this is secure? How do I manage all these volumes? What is the best way to share them among different hosts?
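That mapping itself is just a flag on docker run; a minimal example (paths are hypothetical):

# mount /srv/www from the host on /usr/share/nginx/html inside the container
docker run -d -v /srv/www:/usr/share/nginx/html nginx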

One way to consolidate your volumes is to use "data-only" containers. This means that you run a container with some volumes attached to it and then link to them from other containers so they all use a central place to store data. This works but has some drawbacks imho.
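A minimal sketch of the pattern (the names, paths and the myapp image are made up):

# a container whose only job is to own the /data volume
docker run --name appdata -v /data busybox true

# other containers reuse that volume via --volumes-from
docker run -d --name app1 --volumes-from appdata myapp
docker run --rm --volumes-from appdata busybox ls /data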

The data-only container just needs to exist (it doesn't even need to be running), and as long as this container or a container that links to it exists, the volumes are kept on the system. Now, if you accidentally delete the container holding the volumes, or you delete the last container linking to them, you lose all your data. With containers coming and going it can become tricky to keep track of this, and making mistakes at this level has serious consequences.

Security

Docker images

One of the "advantages" that Docker brings is the fact that you can pull images from the Docker hub and from what I have read this is in most cases encouraged. Now, everyone I know who runs a virtualization platform will never pull a Virtual Appliance and run it without feeling dirty. when using a cloud platform, chances are that you are using prebuild images to deploy new instances from. This is analogue to the Docker images with that difference that people who care about their infrastructure build their own images. Now most Linux distributions provide an "official" Docker image. These are the so called "trusted" images which I think is fine to use as a base image for everything else. But when I search the Docker Hub for Redis I get 1546 results. Do you trust all of them and would you use them in your environment?

What can go wrong with pulling an OpenVPN container. Right..?

This is also an interesting read: https://titanous.com/posts/docker-insecurity

User namespacing

Currently there's no user namespacing, which means that if a UID inside the Docker container matches the UID of a user on the host, that user will have access to the host with the same permissions. This is one of the reasons why you should not run processes as the root user inside containers (or outside them). But even then you need to be careful with what you're doing.
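A small illustration of why this matters, using a hypothetical bind mount of /etc:

# root inside the container is UID 0 to the host kernel, so it can read protected host files
docker run --rm -v /etc:/host-etc busybox cat /host-etc/shadow

# the same command as an unprivileged UID gets denied
docker run --rm -u 1000 -v /etc:/host-etc busybox cat /host-etc/shadow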

Containers, containers, containers..

When you run more and more stuff in containers, you'll end up with a few hundred, a few thousand or even more containers. If you're lucky they all share the same base image. And even if they do, you still need to update them with fixes and security patches, which results in newer base images. At that point all your existing containers need to be rebuilt and redeployed. Welcome to the immutable world.

So the "problem" just shifts up a layer. A Layer where the developers have more control over what gets added. What do you do when the next OpenSSL bug pops up? Do you know which containers has which OpenSSL version..?

Minimal OS's

Everyone seems to be building these mini OSes these days: CoreOS, Project Atomic, RancherOS, etc. The idea is that updating the base OS is a breeze (reboot, A/B partitions, etc.) and all the services we need run inside containers.

That's all nice, but people with a sysadmin background will quickly start asking questions like: can I do software RAID? Can I add my own monitoring on this host? Can I integrate with my storage setup? And so on.

Recap

What I wanted to point out is that when you decide to start using containers, keep in mind that this means you'll need to change your mindset and be ready to learn quite some new ways to do things.

While Docker is still young and has some shortcomings I really enjoy working with it on my laptop and use it for testing/CI purposes. It's also exciting (and scary at the same time) to see how fast all of this evolves.

I've been writing this post on and off for some weeks and recently some announcements at Dockercon might address some of the above issues. Anyway, if you've read until here, I want to thank you and good luck with all your container endeavors.

Ansible and OpenNebula

Recently we decided to deploy a private cloud to replace our RHEV setup. The reasoning behind this will be covered in another blog post, but the main reason was the higher level of automation we could achieve with OpenNebula compared to RHEV. In this post I would like to talk about how we used Ansible to help us with the setup of OpenNebula and what we are going to do in the near future.

Why Ansible? Well, we were already using Ansible to perform repeatable deployments in our test environments to save us some valuable time compared to "manual" setups. This way we can test new code or deploy complete test environments faster.

So when we decided to deploy OpenNebula we started writing Ansible playbooks right from the start, because we wanted to test several setups until we had a configuration that was performant enough and configured the way we wanted. This allowed us to rebuild the complete setup from scratch (using Cobbler for the physical deployments) and have a fresh environment 30 minutes later: a fully configured OpenNebula management node, the KVM hypervisors, and everything we needed to further configure our Gluster storage backend.

One of the advantages of Ansible is that it is not just a configuration management tool but can do orchestration too. OpenNebula, for example, uses SSH to communicate with all the hypervisor nodes. So during the deployment of a hypervisor node we use delegate_to to fetch the SSH keys generated earlier and deploy them on the hypervisor. Pretty convenient.
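In essence the role automates something like the following, assuming the default oneadmin user with its home in /var/lib/one (kvmnode01 is a placeholder hostname; in the playbook this is a fetch task combined with delegate_to):

# on the frontend: generate a key for oneadmin if there is none yet
sudo -u oneadmin ssh-keygen -t rsa -N "" -f /var/lib/one/.ssh/id_rsa

# push it to each hypervisor so the frontend can reach it over SSH without a password
sudo -u oneadmin ssh-copy-id oneadmin@kvmnode01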

We currently have quite complete playbooks that use a combination of 3 roles. They do need some testing and when we feel they can be used by other people too, we'll put them on the Ansible Galaxy.

  • one_core : configures the base for both KVM nodes and the sunstone service
  • one_sunstone : configures the Sunstone UI service
  • one_kvmnode : configures the hypervisor

Until now we haven't used Ansible to keep our config in sync or to do updates, but it's something we have in the pipeline and should be quite trivial using the current Ansible playbooks.

Another thing we'll start working on is modules to support OpenNebula. We already had a look at the possibilities OpenNebula provides, and they should be quite trivial to build using its API.

We are very pleased with both projects as they aim to keep things simple which is important to us since we are a very small team and have to move forward at a rather fast pace.

The playbooks can be found on GitHub.

Backup Zarafa with Bacula

Last week I finished migrating our mail/collaboration platform to Zarafa, and as with all things, this needs to be backed up. We're running the Zarafa Enterprise edition, which comes with a backup tool called zarafa-backup that works like this:

The first time you run the zarafa-backup tool it creates a data file and an index file referring to the items (folders and mails) inside the data file.

The next time you run zarafa-backup it detects the existing files, creates an incremental data file and updates the corresponding index file. It keeps doing this until you delete the data and index files; then it will create a new full backup and the cycle starts all over.

We are using Bacula to do our backups so I needed to work something out.

As stated earlier, zarafa-backup just keeps on creating incrementals, which means that if you keep this running, a restore will involve restoring a lot of incrementals first. That is not something I wanted.

So I made my schedule like this:

  • Create a full backup on Friday evening. That way we have the weekend to run the backup.
  • Until the next Friday we let zarafa-backup create incrementals in the working folder.
  • On the next Friday we move the complete set to another folder (I called it weekly) and back it up. If this is successful we empty the weekly folder again. Then we run zarafa-backup again, which creates a new full backup (since the complete set has been moved and the working directory is empty).

Bacula schedule

Two schedules are created, each with its own storage pool:

  • One that runs on Friday.
  • One that runs on all the other days.

Schedule {
    Name = "zarafa-dly"
    Run = Level=full pool=ZDLY-POOL sat-thu at 19:00
}

Schedule {
    Name = "zarafa-wkly"
    Run = Level=full pool=ZWKLY-POOL fri at 19:00
}

Bacula Zarafa client

The client config has two jobs defined:

  • One that does the daily backups using the "zarafa-dly" schedule.
  • One that backs up the weekly sets using the "zarafa-wkly" schedule.

Each job runs a script before the backup. The second job, which backs up the weekly sets, also has a script that runs after the backup has been made; this script empties the weekly folder.

Job {
    Name = "MAIL02-DLY"
    FileSet = "ZARAFA-STORES"
    Client = mail-02
    Storage = TapeRobot
    Write Bootstrap = "/var/lib/bacula/%c.bsr"
    Messages = Standard
    Schedule = "zarafa-dly"
    Type = Backup
    Pool = ZDLY-POOL
    ClientRunBeforeJob = "/etc/bacula/zbackup.sh"
    Run After Job = "/scripts/bacula2nagios \"%n\" 0 \"%e %l %v\""
    Run After Failed Job = "/scripts/bacula2nagios \"%n\" 1 \"%e %l %v\""
}

Job {
    Name = "MAIL02-WKLY"
    FileSet = "ZARAFA-WEEKLY-STORES"
    Client = mail-02
    Storage = TapeRobot
    Write Bootstrap = "/var/lib/bacula/%c.bsr"
    Messages = Standard
    Schedule = "zarafa-wkly"
    Type = Backup
    Pool = ZWKLY-POOL
    ClientRunBeforeJob = "/etc/bacula/zbackup.sh"
    Client Run After Job = "/etc/bacula/zbackup-cleanup.sh"
    Run After Job = "/scripts/bacula2nagios \"%n\" 0 \"%e %l %v\""
    Run After Failed Job = "/scripts/bacula2nagios \"%n\" 1 \"%e %l %v\""
}

Backup script

#!/bin/bash

#Variables
ZBFOLDER=/zarafa_backup/working
WEEKLYFOLDER=/zarafa_backup/weekly
DRFOLDER=/zarafa_backup/dr
WEEK=`date +%W`

# check whether it's Friday and whether the working folder is empty
if [ `date +%w` -eq 5 -a `ls -A $ZBFOLDER | wc -l` -eq 0 ]; then
    # Friday and the working folder is empty: start a new full backup
    echo "Starting Full backup"
    zarafa-backup -a -o $ZBFOLDER
elif [ `date +%w` -eq 5 -a `ls -A $ZBFOLDER | wc -l` -ne 0 ]; then
    # Friday and there is an existing set: copy it to the weekly folder first
    echo "Copying working to weekly and start new Full backup"
    mkdir -p $WEEKLYFOLDER/week-$WEEK
    cp $ZBFOLDER/* $WEEKLYFOLDER/week-$WEEK
    rm -f $ZBFOLDER/*
    zarafa-backup -a -o $ZBFOLDER
else
    # any other day: zarafa-backup adds an incremental to the working folder
    echo "Starting Incremental backup"
    zarafa-backup -a -o $ZBFOLDER
fi


Cleanup script

#!/bin/bash
#cleanup the weekly folder after bacula has run
WEEKLYFOLDER=/zarafa_backup/weekly

rm -rf $WEEKLYFOLDER/*

Detect MTU size when using Jumbo Frames

Recently I set up an iSCSI target based on RHEL6 + tgt. After adding logical volumes to a target in the tgtd config file, the iSCSI target was discoverable and ready for use.

After testing this setup for a few days I wanted to tune the network traffic by enabling Jumbo Frames. If you search the interwebz you'll most likely find information about adding "MTU=9000" (for RHEL-based clones) to the config file of the network interface.
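For reference, that typically boils down to something like this on a RHEL6-style box (assuming eth0 is the storage interface, and keeping in mind that bouncing the interface will drop any session running over it):

# add the MTU setting to the interface config and bounce the interface
echo "MTU=9000" >> /etc/sysconfig/network-scripts/ifcfg-eth0
ifdown eth0 && ifup eth0

# verify the interface picked it up
ip link show eth0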

The problem with Jumbo Frames is that when you set the MTU higher than what the path between you and the target actually supports, you get fragmentation, and blindly changing your MTU to 9000 will probably lead to exactly that. If you don't know this it can be quite hard to troubleshoot, because you can still use ssh, ping the target, etc., but the iSCSI targets will keep failing.

You can easily check this with good old ping. Running this:

ping -M do -s 9000 <target_ip>

  • -M : MTU discovery strategy. "do" means "prohibit fragmentation"
  • -s : here you can specify the packet size

gave me the following result:

From 10.0.0.13 icmp_seq=1 Frag needed and DF set (mtu = 9000)

Lower the packet size until you get a normal ping reply. Keep in mind that -s sets the ICMP payload size and that the IP and ICMP headers add another 28 bytes, so the MTU to configure in your network card's config file is the working payload size plus 28.

ping -M do -s 8900 <target_ip>

RHEV setup

This blog post comes a little late because I did this RHEV setup at our company more than six months ago, and it has been living in the drafts folder for some time. Now, with the RHEV 3.0 beta released, I thought it was time to publish it.

About a year and a half ago we started looking at alternatives for our VMware ESXi setup because we wanted to add hypervisor nodes to our two existing ESXi nodes. We also wanted the ability to live migrate VMs between the nodes. At the same time Red Hat released RHEV 2.1, and being a Red Hat partner we decided to evaluate it.

We extended our existing setup with 2 Supermicro servers and a Supermicro SATA disk based SAN box configured as an iSCSI target providing around 8TB of usable storage.

Migration

To migrate our existing VMs running on VMware we used the virt-v2v tool, which converts and moves VMware machines to RHEV. This procedure can be scripted so you can define a set of VMs to migrate in one go. Unfortunately these VMs need to be powered down. I noticed that if your vmdk folders/files are scattered around on your storage, including different folder names, virt-v2v in some cases bails out. In our case I could understand why the tool refused to migrate some machines (it was quite a mess).

Hypervisors

You have two options to install the hypervisor nodes:

  • RHEV-H : a stripped-down RHEL with a 100MB footprint that provides just enough to function as a hypervisor node.
  • RHEL : a default RHEL install you can configure yourself.

We created a custom profile on our Kickstart server so we could easily deploy hypervisor nodes based on a standard RHEL. By using a standard RHEL you can install additional packages later on, which is not the case with a RHEV-H based install.

Once installed, you can add the node to your cluster from within the manager interface. It will then automatically install the necessary packages and become active in the cluster.

Storage

After adding hypervisor nodes you need to create "Storage Domains" based on either NFS, FC or iSCSI. Besides Storage Domains you also need to define an ISO Domain to store your installation images. If you want to migrate VMs from VMware or other RHEV clusters you need to create an Export Domain.

In each cluster one hypervisor node automatically gets the SPM (Storage Pool Manager) role. This host keeps track of where storage is assigned. As soon as it is put in maintenance or becomes unavailable, another host in the cluster takes over the SPM role.

VM's can use Preallocated disks (RAW) or Thin Provisioning (QCOW). For best performance Preallocated is recommended.

Conclusion

We have been running this setup for more than a year now and haven't had any real issues with it. We filed two support cases, which have been resolved in newer releases of RHEV. At the moment we run around 100 VMs, and although I haven't run any benchmarks yet, I see no real difference with our VMware setup using FC storage. Although the product still has some drawbacks, I believe it has a solid base to build on and already has some nice features like live migration, load balancing, thin provisioning, etc.

Cons

  • RHEV-M (manager) runs on Windows
  • RHEV-M can only be accessed via IE (will probably change in 3.1)
  • Storage part is quite confusing at first.
  • API only accessible via PowerShell
  • no live snapshots

In a few weeks I'll probably start testing RHEV 3.0, which now runs on Linux on JBoss. That makes me wonder whether JBoss clustering could be used to get RHEV-M working in an HA setup.

Switched to Jekyll

It has been a while since I last blogged about a "decent" topic, and actually it's been a while since I blogged about anything. The reason is a lack of time and also some laziness. But that should change now, and the first step I took was migrating my blog from Drupal to a Jekyll-generated website. Not that Drupal is bad or anything, but it's quite overkill and somehow didn't feel really productive while creating content.

So how did I end up with Jekyll?

Because I like using plain text files for writing (I use LaTeX quite a lot) I started looking for a blogging tool that uses plain text files to store its content instead of a database. PyBlosxom and Blosxom came to mind, but then Jekyll popped up in one of my search results and I immediately liked it because it generates static content you can upload to any webserver. No more PHP, Python, Perl, MySQL or updating needed. However, you do need Ruby on the machine that does the generation. One "drawback" of a static website is commenting, and for a moment I was planning on dropping comments on my blog, but I went with Disqus, which I actually quite like.

Now I have my blog stored in a git repository that rsyncs the static content to my webserver when I push my changes. As simple as that.

I really like the thought of using Markdown and vim to write my blog posts from now on (and of course the geeky factor of all this). The only thing left is improving the layout and sanitizing the setup a bit more.

I'll be at LOAD (Linux Open Administrator Days)

LOADays

Getting Dropbox to work with SELinux

Recently Serge mentioned Dropbox to me, and I remembered creating an account once, but I hadn't used or installed it in the last two years.

These days you also get a lot more free space with your Dropbox, so I decided to start using it again.

So I installed Dropbox using the RPM from their website, but got an SELinux warning. setroubleshootd explains perfectly what's going on, and the solution is trivial.

[root@localhost ~]# semanage fcontext -a -t execmem_exec_t '/home/vincent/.dropbox-dist/dropbox'
[root@localhost ~]# restorecon -vvF '/home/vincent/.dropbox-dist/dropbox'
restorecon reset /home/vincent/.dropbox-dist/dropbox context unconfined_u:object_r:user_home_t:s0->system_u:object_r:execmem_exec_t:s0
