I’m excited to launch our new website this week! It was a lot more work than I expected because the scope changed dramatically from what I initially set out to do. At first, this was just going to be a simple re-branding from the previous business name. Then I decided to rewrite the whole site entirely, along with designing a new look for it. Then came the challenge of incorporating what had been a separate blog. In the end, I went with what I feel is a very secure setup that I’d like to share with others looking to accomplish something similar.
I moved everything over from an old PHP main site on one server and a separate Ghost CMS blog on another server into one cohesive site using a combination of Static Site Generators (SSG) and Serverless Architecture. My site’s attack surface is much smaller, page loads are much faster, and hosting costs are much lower - WIN!
The problem with traditional website hosting
What could be worse than a security company’s website getting hacked? We’re busy trying to tighten other companies’ security postures, so we’d rather not spend time patching, monitoring, and updating our own servers. For that matter, I’m sure many other companies would prefer to focus on their core vision rather than worry about web server security.
For most, the decisions lie primarily between a few main options:
- Managed CMS (WP Engine, SiteGround, Bluehost, etc)
- Self-hosted CMS (WordPress, Ghost, Drupal, etc)
- Managed Website Builders (Squarespace, Wix, Medium, etc)
- Self-hosted custom sites (PHP, .Net, etc)
There are many reasons why someone would want to go with one of these services, but there are also some potentially significant downsides. You can find the pros of these services directly from the providers’ marketing pages, so I’ll focus here on why these options weren’t good for Fracture Labs. Many of these issues could be mitigated with the right design upfront and proper ongoing care, but again, the point here is to eliminate as many attack paths as possible so we don’t even have to worry about what could happen.
Issues with most website options
- Cost - whether you’re paying for managed hosting or spending your own time and money maintaining servers yourself, the costs can add up quickly.
- Control - there are plug-ins to handle almost any need, but you’re still limited by what the software allows. You might have more specific needs that just can’t be met by a rigid CMS.
- Administration - These systems are administered directly on the web server, which means that anyone who gains access to the system can completely take it over to steal data, serve malware, or attack other servers as part of a botnet.
- Security Maintenance - Making sure your servers are patched promptly and routinely can take a lot of effort. The same goes for ensuring only secure code makes it through publishing. And dealing with third-party plug-ins? How often do most companies go back and check if their old plug-ins have any known vulnerabilities?
Goals for a low-maintenance, secure website
Well, I guess the goals are pretty much the inverse of the issues I just mentioned!
- Low cost, but fast
- High-degree of control
- Easy to administer, but difficult for an attacker to take control
- As close to set-it-and-forget-it as possible
I now have a fully serverless site that provides all of the functionality I’m looking for at blazing fast speeds, at low cost, and with a minimal attack surface! Boom!
Static Site Generators (SSG)
It’s great to have a dynamic website, especially for blogging. But the nature of dynamic websites presents a large attack surface. Whether the risks are introduced by poor coding, out-of-date plug-ins, or brute-force attacks against authentication modules, the dynamic capability introduces more risks than necessary if you’re able to take advantage of SSGs.
In my case, I had to ramp up on some of the newfangled DevOps tools, but it was well worth it. I’m now building the static site using a combination of Gulp for my SCSS and JS files, and Hugo for my templating and dynamic-to-static conversion. A big shout-out goes to @CaseyCammilleri from Sprocket Security for teaching me about the SSG concept during one of our chats.
So this all means I get to write my site content with Markdown syntax in Sublime, while Gulp and Hugo watch for changes, compile in real-time, and provide a smoking fast local development environment for me to test against before publishing.
The Gulp command is simple: running gulp builds everything based on my gulpfile.
The Hugo script I use for development (might have some redundant calls in it, but this is what I landed on) is also pretty easy:
rm -rf public/* && hugo && hugo server -Dwv --renderToDisk
What’s great is that my last blog was written in Markdown as well and self-hosted in Ghost. All of the articles moved over flawlessly to this new system, so I didn’t lose anything or have to re-write it all!
So at this point, I’ve eliminated any server-side code execution while maintaining the functionality of a dynamic site!
Serverless Architecture
Ok, let me start by saying I’m not a big fan of the term ‘serverless computing’. I get what the intention is, but obviously there are still servers doing all of the dirty work behind the scenes. It just comes down to this: I don’t need to know about or manage any of the servers hosting my site. You could say the same about any managed solution, really.
Anyway, now that I had a static site, I figured why maintain a server just for that? I was already running the site on an AWS EC2 instance, so I decided to leverage AWS S3 instead. Deploying to AWS is incredibly quick and easy:
rm -rf public/* && hugo && aws s3 sync <src folder> s3://<bucketname> --acl public-read
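Besides syncing the files up, the bucket itself needs static website hosting enabled. Here’s a minimal sketch of that step using boto3 - the bucket name is a placeholder, and the call assumes you have AWS credentials configured:

```python
def website_config(index="index.html", error="404.html"):
    # The configuration S3 expects for static website hosting:
    # a default document and an error page
    return {
        "IndexDocument": {"Suffix": index},
        "ErrorDocument": {"Key": error},
    }

def enable_static_hosting(bucket_name):
    # Requires boto3 and AWS credentials; bucket_name is a placeholder.
    # The import is deferred so website_config stays usable offline.
    import boto3
    s3 = boto3.client("s3")
    s3.put_bucket_website(
        Bucket=bucket_name,
        WebsiteConfiguration=website_config(),
    )
```

With that in place, the bucket serves index.html by default and a friendly error page instead of raw XML errors.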
What about Contact Forms?
Ok, so this is usually the big hang-up with fully-static sites. How can you incorporate a contact form on a static site? Sure, you could use one of the many “Contact Us” form providers (FormKeep, FormSpree, etc), but then you’re trusting them with your customers’ potentially-sensitive requests and might have to pay for it as well.
This is where Function as a Service (FaaS) comes into play. In my case, I continued down the AWS route and chose to implement my contact form with an API Gateway endpoint, a Lambda back-end, and the Simple Email Service (SES) email platform. AWS has some decent security features built in regarding rate limiting, but I’ve added Captcha to the form as well.
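To give a sense of how little code the Lambda back-end needs, here’s a minimal sketch of a handler that parses a URL-encoded form POST from an API Gateway proxy event and hands it to SES. The field names and email addresses are hypothetical, and the SES call (shown commented out) requires a verified sender identity:

```python
import json
import urllib.parse

def parse_form(event):
    # API Gateway proxy events deliver the POST body as a raw string
    fields = urllib.parse.parse_qs(event.get("body") or "")
    return {k: v[0] for k, v in fields.items()}

def handler(event, context):
    form = parse_form(event)
    # Hypothetical addresses - SES requires a verified sender identity:
    # import boto3
    # boto3.client("ses").send_email(
    #     Source="forms@example.com",
    #     Destination={"ToAddresses": ["hello@example.com"]},
    #     Message={
    #         "Subject": {"Data": "Contact form submission"},
    #         "Body": {"Text": {"Data": form.get("message", "")}},
    #     },
    # )
    return {"statusCode": 200, "body": json.dumps({"sent": True})}
```

Point the API Gateway endpoint at this handler and the static site’s form has a working back-end with no server to maintain.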
How about SSL/TLS?
I believe you should configure every site to work over SSL/TLS, regardless of whether or not you are accepting sensitive data from visitors. Besides Google treating HTTPS URLs better for search ranking, I also appreciate the additional assurance visitors gain from verifying that the site they’re on is really yours.
In order to easily serve my site over HTTPS, I created a CloudFront distribution for it using a certificate generated by Amazon. No more spending hundreds of dollars on a certificate or messing with the frequent renewals associated with Let’s Encrypt (still a great option for other sites - check them out if you haven’t already).
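For anyone scripting the same setup, requesting the Amazon-issued certificate is a single API call. A sketch with boto3 - the domain is a placeholder, and note that certificates used by CloudFront must live in the us-east-1 region:

```python
def cert_request(domain):
    # CloudFront only accepts ACM certificates from us-east-1;
    # DNS validation lets ACM renew the certificate automatically
    return {
        "DomainName": domain,
        "SubjectAlternativeNames": ["www." + domain],
        "ValidationMethod": "DNS",
    }

def request_certificate(domain):
    # Requires boto3 and AWS credentials; domain is a placeholder.
    import boto3
    acm = boto3.client("acm", region_name="us-east-1")
    return acm.request_certificate(**cert_request(domain))
```

Once the DNS validation record is in place, the certificate renews itself - no manual renewal cycle at all.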
Where does this leave us?
Well, at this point, with a minimal amount of effort upfront, I now have a secured website running in a serverless architecture. I don’t need to worry about patching Apache/Nginx immediately following a vulnerability report, or scanning source code for potential vulnerabilities. I only need to focus on properly protecting my AWS account and access keys, and most importantly, on content creation. Oh, and since I just moved everything over last week, I haven’t seen a bill from AWS yet, but from what I understand, I expect it to be much lower. Not to mention, I know AWS will be able to handle the load when my next hot blog post goes viral!
Contact the author directly at @brkr19 if you have any questions or comments about this post!