To manage access to the vault cache storage, we need to set up an OpenVPN link between our machines and the unibo (= University of Bologna) host that mounts the storage.
@cmezzett we could neither confirm nor deny (cit.) that 1194/UDP is open from our machines all the way to 137.204.21.15{8,9}.
We are (now) sure it is open outbound from the machine we will use to initiate the connection (FQDN: swh1.inria.fr).
But we received no answer from the two unibo IPs: either the port is still firewalled on your end, or nothing is answering us there.
Port 1194 is now open to the world; I was able to test it from another host outside our network and it works.
I suggest waiting until you can log in to the servers and set up the VPN before narrowing the opening down to your network.
we are now waiting for the unibo accounts for myself, @ardumont and @olasd to be created, to establish a first access (the action is on me here, as soon as I've got all the info needed to request account creation)
we will be root on the machines; the only constraints are:
- do not unlink the machines from the Active Directory clusters they are currently connected to
- leave root access untouched (I'm assuming unibo can still log in as root via authorized_keys)
via openvpn we can then link the machines to our puppet setup, if needed/desired
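For reference, the OpenVPN side of this could start from a minimal client config along these lines (a sketch only; the remote endpoint and the key/cert paths are placeholders, not values from this thread):

```
# /etc/openvpn/client.conf -- minimal sketch; the remote endpoint and the
# key/cert paths below are placeholders, not taken from this thread
client
dev tun
proto udp
remote vpn.example.swh.org 1194   # SWH-side endpoint (placeholder)
ca   /etc/openvpn/ca.crt
cert /etc/openvpn/client.crt
key  /etc/openvpn/client.key
persist-key
persist-tun
```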
I've requested a new Juniper VPN profile for you in order to access the VMs for the startup configurations
A correction about the root account: the OS is Ubuntu 16.04, so the root account is disabled by default; if you need an admin user that is not subject to sudo, please create a new one.
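On Ubuntu 16.04 that typically means creating a dedicated user in the `sudo` group, e.g. (a sketch; the username `swhadm` is a placeholder):

```shell
# Create a dedicated admin user; the name "swhadm" is a placeholder
sudo useradd -m -s /bin/bash -G sudo swhadm
sudo passwd swhadm          # set its password interactively
id -nG swhadm               # verify it is in the sudo group
```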
we keep an administrative user on the machines so that, just in case, we can get in to inspect/debug the infrastructure should anomalies arise
the machines will mount the storage share via a dedicated ethernet interface on a private network
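That is, something along the lines of this fstab entry (a sketch; the private-network address and export path are placeholders):

```
# /etc/fstab -- server address and export path are placeholders
192.168.x.y:/vault-share  /mnt  nfs  defaults,_netdev  0  0
```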
VM details:
swh-test.personale.dir.unibo.it (137.204.21.158)
swh-prod.personale.dir.unibo.it (137.204.21.159)
Should you need a DNS entry under unibo.it, it will not be a problem.
Or visit https://vpnssl.unibo.it with a browser equipped with a Java runtime (good luck with that).
You'll need to sign in using the short version of the username (the sAMAccountName), e.g. john.doe2; this is also the case for the remote ssh access, e.g.:
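For instance, an `~/.ssh/config` entry could look like this (hypothetical snippet; `john.doe2` is the sample short username from above, substitute your own):

```
# ~/.ssh/config -- "john.doe2" is the sample sAMAccountName from above
Host swh-test
    HostName swh-test.personale.dir.unibo.it
    User john.doe2
```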
@cmezzett I used the OpenConnect plugin for Network Manager to connect to the VPN successfully. (You need to choose VPN Protocol = "Juniper/Pulse Network Connect" and set vpnssl.unibo.it as endpoint).
ii  libopenconnect5:amd64              7.08-1   amd64  open client for Cisco AnyConnect VPN - shared library
ii  network-manager-openconnect        1.2.4-1  amd64  network management framework (OpenConnect plugin core)
ii  network-manager-openconnect-gnome  1.2.4-1  amd64  network management framework (OpenConnect plugin GNOME GUI)
ii  openconnect                        7.08-1   amd64  open client for Cisco AnyConnect VPN
Once connected to the vpn I managed to connect to both machines and sudo to root.
I was trying to install OpenVPN on the swh-test and swh-prod machines, but it looks like outgoing packets with 1194 as the destination port are blocked by the firewall (not on the machines themselves; their iptables rule sets are empty). I might be doing something wrong here, but I tried to listen on port 1194 on different hosts and I couldn't connect to them from swh-test or swh-prod.
@cmezzett After doing more tests, I'm pretty sure the destination port 1194 in UDP is blocked for both machines. Would it be possible to check the firewall? Thanks :-)
I have a problem with the NFS mountpoint: I'm using a non-root user for the service that reads and writes on the vault, but I can't write on the NFS mountpoint. When I create a directory as root on the mountpoint, it gets created with uid/gid = 4294967294 (2^32 - 2), and I can't chown it to a uid/gid that my user would be able to write with:
root@swh-test:/mnt# mkdir test
root@swh-test:/mnt# chown 1019:1019 test
chown: changing ownership of 'test': Operation not permitted
root@swh-test:/mnt# ls -l
drwxr-xr-x 2 4294967294 4294967294 2048 Dec  6 17:53 test
Is that a setting that you can disable on your side easily, or do we have a way to work around this behavior?
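For reference, this looks like NFS root squashing (and/or NFSv4 id mapping) on the server side: root, and with all_squash everyone, gets mapped to an anonymous uid, which shows up client-side as 4294967294 (i.e. -2 as an unsigned 32-bit value). If it is squashing, the server-side export options controlling it look roughly like this (a sketch; the export path and client name are placeholders, uid/gid 1019 is the service user from the transcript above):

```
# /etc/exports on the NFS server -- export path and client are placeholders.
# Map squashed users to a uid/gid the service user can work with:
/vault-share  swh-test.personale.dir.unibo.it(rw,anonuid=1019,anongid=1019)
# or allow client root to chown freely (less safe):
/vault-share  swh-test.personale.dir.unibo.it(rw,no_root_squash)
```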