The limitations imposed by software-layer networking on Public Cloud

The benefits of public cloud services like Microsoft Azure and Amazon Web Services are well known and frequently highlighted, and with good reason: public cloud offers extremely reliable, highly available and easily scalable architecture for IT infrastructure deployments, making it an attractive option for lift-and-shift projects. But while these services allow businesses to extend their local network onto fully managed hardware, the implications of the abstracted network layer they rely on are discussed far less often.

A key component of any such migration is the implementation of line-of-business applications, which requires close coordination with third-party vendor support. In an ideal world, software manufacturers would offer versions of their software specifically catered to cloud services. In reality this is often not the case, especially where legacy applications are concerned, and because updating software for a single client's purposes is rarely practical, the task of resolving implementation hurdles falls to cloud partners.

In my Professional Services role at Evolve IT, I recently came across a good example of one such problem during a migration to Azure, caused by the limitations of one of our client's key applications. The software used an FTP client to transfer data to a third party for external processing, and active FTP was necessary for the connection. Because the virtual machine in question could only reach the internet via its NATed public IP – an address the client had no visibility of – the connection would fail after authentication.
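To see why NAT breaks things here, it helps to look at what an active-mode FTP client actually sends. In active mode, the client advertises an IP address and port to the server in a PORT command, and the server then connects back to that endpoint for the data transfer. The sketch below is my own illustration (the addresses are made up); it shows how the command is built, and why a client that only knows its private address ends up advertising an endpoint the remote server can never reach:

```python
def port_command(ip: str, port: int) -> str:
    """Build the PORT command an active-mode FTP client sends.

    The format is PORT h1,h2,h3,h4,p1,p2 where the four h values are the
    IP octets and the port is encoded as p1 * 256 + p2 (RFC 959).
    """
    octets = ip.replace(".", ",")
    return f"PORT {octets},{port // 256},{port % 256}"

# Behind NAT, the client only knows its private address, so it advertises
# an endpoint that is unreachable from the remote server's side:
print(port_command("10.0.1.4", 50001))  # PORT 10,0,1,4,195,81
```

The remote server dutifully tries to open a data connection to 10.0.1.4 – a private address that means nothing on the public internet – and the transfer fails, even though authentication on the control channel succeeded.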

Engaging vendor support confirmed that passive FTP connections were not supported and that a workaround would be needed for the program to function as required. On-premises, or on private cloud, an Application Level Gateway (ALG) would be used to rewrite outbound FTP packets, updating the PORT command with the public IP address of the instance. Some quick research suggested that no such service was available on Azure, and discussing the problem with a number of engineers at Microsoft reinforced this suspicion.
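Conceptually, the ALG's job on the control channel is simple: intercept the outbound PORT command and substitute the public address before it leaves the network. Here is a minimal illustration of that rewrite (the addresses are hypothetical, and a real ALG would also set up the matching NAT mapping so the server's inbound data connection is forwarded to the client):

```python
import re

def rewrite_port(line: str, public_ip: str) -> str:
    """Rewrite the address in an outbound PORT command, as an FTP ALG would.

    Only the four IP octets are replaced; the encoded port (p1,p2) is
    passed through unchanged. Non-PORT control lines are left untouched.
    """
    m = re.match(r"PORT (\d+,\d+,\d+,\d+),(\d+),(\d+)$", line)
    if not m:
        return line
    return f"PORT {public_ip.replace('.', ',')},{m.group(2)},{m.group(3)}"

print(rewrite_port("PORT 10,0,1,4,195,81", "52.187.0.10"))
# PORT 52,187,0,10,195,81
```

With the rewritten command, the server connects back to the public IP, where the gateway forwards the data connection on to the client – which is exactly the piece of plumbing that was missing from the Azure deployment.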

With options running out, I turned to pfSense, a popular FreeBSD-based router and firewall distribution I was familiar with and had used to solve similar problems on on-site infrastructure in the past. Some quick Googling suggested that pfSense was supported on Azure, and even available pre-configured in the Marketplace – things were looking up! At least momentarily, that is: due to licensing limitations, the operating system wasn't available under CSP licensing and would require a secondary pay-as-you-go subscription. For those not familiar with the intricacies of Azure, resources under different subscriptions are deployed in isolation from one another, meaning that an additional level of configuration would be required, further complicating matters. This, along with the administrative and support implications, made the solution less than ideal.

With pfSense off the cards, I was back where I started, and concerns were being raised about the viability of the deployment as a whole. From the client's perspective, if the application couldn't be supported on Azure then the project wasn't going to work – an understandable position.

Not ready to admit defeat, I took a step back and reanalysed the situation. While pfSense wasn't suitable, it wasn't the OS that was key to solving the problem, but rather the FTP plug-in it offered. What if I could implement that key component by itself? Further research suggested this might be possible after all, using a piece of software from the SuSE Proxy Suite known simply as ftp-proxy.

After deploying a basic Debian VM and performing some quick configuration, it was time to put the idea to the test. The result? Success! The proxy functioned as required and the active FTP connection completed without issues. All that was left was to wrap things up and hand the solution over for client testing. After positive feedback, all parties were comfortable that the problem was resolved, and the new public cloud infrastructure was ready for production.

As the process I've outlined above shows, while public cloud answers a number of the limitations of on-premises deployments, it can also introduce complications of its own. That said, these can often be overcome with careful consideration and a willingness to think outside the box, leading to benefits for all parties involved and an end product that gets the job done.

Posted by Matthew Billiet