[OpenShift] – SDN Configuration

Understanding

The SDN configuration restricts the access/communication between cluster components. The plugin type is selected through the following variable in the [OSEv3:vars] section of the inventory.

# The following defines the type of SDN plugin, which could be for example 'redhat/openshift-ovs-networkpolicy' 
# See more about it here: https://docs.openshift.com/container-platform/3.11/install_config/configuring_sdn.html 
os_sdn_network_plugin_name='redhat/openshift-ovs-subnet'
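As a quick local sanity check, the inventory line can be validated against the three supported plugin names before running the playbooks (a minimal sketch; the snippet file path is hypothetical, not part of the real inventory):

```shell
# Sketch: validate that the inventory sets one of the three supported
# SDN plugin names. The local snippet file below is hypothetical.
cat > /tmp/inventory-snippet <<'EOF'
os_sdn_network_plugin_name='redhat/openshift-ovs-subnet'
EOF
grep -E "os_sdn_network_plugin_name='redhat/openshift-ovs-(subnet|multitenant|networkpolicy)'" /tmp/inventory-snippet
```

grep exits non-zero (and prints nothing) if the value is misspelled, which catches typos before a full playbook run.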
The plugin types are explained below.
  • openshift-ovs-subnet: The most permissive plugin, where every pod can communicate with every other pod and service.
  • openshift-ovs-multitenant: Isolates pods at the project level, which means that the pods and services of one project cannot send data to the pods/services of another project. This works because each project receives a unique virtual network ID (VNID) that identifies the owning project and determines whether communication between pods/services is allowed.

Tip: it is possible to join projects, so that even with this configuration in place the pods/services of the joined projects can reach each other. To do so, use the following command.

oc adm pod-network join-projects --to=projecta-dev projectb-dev
  • openshift-ovs-networkpolicy: The most fine-grained option, where isolation is enforced at the pod/service level through NetworkPolicy objects defined by project administrators.
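With the networkpolicy plugin, isolation rules are declared as NetworkPolicy objects. Below is a minimal sketch of an allow-from-same-namespace policy written to a local file (the file path and policy name are hypothetical); it could then be applied to a project with oc create -f /tmp/allow-same-namespace.yaml -n <project>:

```shell
# Sketch: a NetworkPolicy that only allows ingress from pods in the
# same namespace. File path and policy name are hypothetical.
cat > /tmp/allow-same-namespace.yaml <<'EOF'
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
EOF
cat /tmp/allow-same-namespace.yaml
```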

Updating type in the cluster

Following is an example of changing a cluster configured with openshift-ovs-multitenant to the openshift-ovs-subnet type.

    1. Update the inventory
      # The following defines the type of SDN plugin, which could be for example 'redhat/openshift-ovs-networkpolicy' 
      # See more about it here: https://docs.openshift.com/container-platform/3.11/install_config/configuring_sdn.html 
      os_sdn_network_plugin_name='redhat/openshift-ovs-subnet'
    2. Change the configuration on the master(s)
      $ ansible masters -m shell -a "sed -i -e 's/openshift-ovs-multitenant/openshift-ovs-subnet/g' /etc/origin/master/master-config.yaml"
    3. Restart the master(s) to reflect the change
      $ ansible masters -m shell -a "/usr/local/bin/master-restart api" 
      $ ansible masters -m shell -a "/usr/local/bin/master-restart controllers"
    4. Change the configuration on the node_group config in the maps
      $ oc get cm -n openshift-node -o yaml | sed -e 's/ovs-.*/ovs-subnet/' | oc apply -f -
    5. Restart the node(s)
      $ ansible nodes -m shell -a "systemctl restart atomic-openshift-node"
    6. Remove the previous SDN pods so they are recreated with the new configuration (you can list them via $ oc get pods -n openshift-sdn)
      $ oc delete pod -n openshift-sdn --all
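Before running step 2 against the real /etc/origin/master/master-config.yaml, the sed substitution can be dry-run on a local sample of the networkConfig stanza (the sample file below is an assumption for illustration, not the real master config):

```shell
# Sketch: dry-run the step-2 sed substitution on a local sample of
# the networkConfig stanza. The sample file is hypothetical.
cat > /tmp/master-config-sample.yaml <<'EOF'
networkConfig:
  networkPluginName: redhat/openshift-ovs-multitenant
EOF
sed -e 's/openshift-ovs-multitenant/openshift-ovs-subnet/g' /tmp/master-config-sample.yaml
```

The output should show networkPluginName: redhat/openshift-ovs-subnet, confirming the expression does what is intended before it is pushed to the masters via ansible.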

Checking the communication

Following are some steps/commands that can be helpful to check whether one pod is able to connect to/access another one.
  • Get the IP of the pods
# Navigate between the projects via $ oc project <project-name>
# List the pods via $ oc get pods
# Run the following command to get the IP of the pod
$ oc describe pod <pod-name> | egrep 'IP|Node:'
  • Go inside the pod from which you would like to check access to the other one.
$ oc rsh <pod-name>
  • Use the curl to check the connectivity
$ curl <IP>:<PORT> -m 1 (e.g. curl 10.1.14.8:8080 -m 1)
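curl's exit code indicates why the probe failed: exit code 28 (operation timed out) usually means the traffic is being dropped by the SDN isolation, while exit code 7 means the host replied but refused the connection on that port. A small helper sketch to interpret the common cases (this function is hypothetical, not part of OpenShift or curl):

```shell
# Sketch: interpret curl's exit status after the 1-second probe above.
# Hypothetical helper, not part of OpenShift or curl.
interpret_curl_exit() {
  case "$1" in
    0)  echo "reachable: got an HTTP response" ;;
    7)  echo "host reached, but connection refused (port closed?)" ;;
    28) echo "timed out: traffic likely blocked by SDN isolation" ;;
    *)  echo "curl failed with exit code $1" ;;
  esac
}

# Example usage after the probe: curl 10.1.14.8:8080 -m 1; interpret_curl_exit $?
interpret_curl_exit 28
```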