Tutorial: Deploying an API on Amazon EC2
Curator's Note: This article was co-authored by Andrzej Jarzyna.
At 3scale we find Amazon to be a fantastic platform for running APIs due to the complete control you have over the application stack. However, for people new to AWS the learning curve is quite steep, so we have put together our best practices in this short tutorial. Besides Amazon EC2 we will use the Ruby Grape gem to create the API interface and an Nginx proxy to handle access control. Best of all, everything in this tutorial is completely FREE!
For the purpose of this tutorial you will need a running API based on Ruby and the Thin server. If you don’t have one, you can simply clone an example repo as described below (in the “Deploying the Application” section). If you are interested in the background of this example (the Sentiment API), you can see a couple of previous guides which 3scale has published. Here we use version_1 of the API (‘API up and running in 10 minutes’) with some extra sentiment analysis functionality (this part is covered in the second tutorial of the Sentiment API series).
Now we will start the creation and configuration of the Amazon EC2 instance. If you already have an EC2 instance (micro or not), you can jump to the next step -> Preparing the Instance for Deployment.
Creating and Configuring the EC2 Instance
Let’s start by signing up for the Amazon Elastic Compute Cloud (Amazon EC2). For our needs the free tier (http://aws.amazon.com/free/) is enough, covering all the basic needs. Once the account is created, go to the EC2 dashboard under your AWS Management Console and click on the Launch Instance button. That will take you to a popup window where you will continue the process:
- Choose the classic wizard
- Choose an AMI (Ubuntu Server 12.04.1 LTS 32-bit, t1.micro instance), leaving all the other Instance Details settings at their defaults
- Create a keypair and download it – this will be the key you use to make an SSH connection to the server, so it’s VERY IMPORTANT!
- Add inbound rules for the firewall, with source always 0.0.0.0/0: HTTP, HTTPS, ALL ICMP, and TCP port 3000 (used by the Ruby Thin server). A command-line alternative is sketched just after this list.
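If you prefer working from the command line, the same inbound rules can be added with the AWS CLI. This is only a sketch and was not part of the original console-based flow; the security group name quick-start-1 is a placeholder for whichever group the launch wizard created for your instance:
aws ec2 authorize-security-group-ingress --group-name quick-start-1 --protocol tcp --port 80 --cidr 0.0.0.0/0     # HTTP
aws ec2 authorize-security-group-ingress --group-name quick-start-1 --protocol tcp --port 443 --cidr 0.0.0.0/0    # HTTPS
aws ec2 authorize-security-group-ingress --group-name quick-start-1 --protocol tcp --port 3000 --cidr 0.0.0.0/0   # Thin server
The ALL ICMP rule can be added in the same way, or simply through the console as described above.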
Preparing the Instance for Deployment
Now that we have the instance created and running, we can connect to it directly from our console (Windows users via PuTTY). Right-click on your instance, choose Connect and select Connect with a standalone SSH Client.
Follow the steps shown there, changing the username in the given example to ubuntu (instead of root).
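For illustration, the resulting connection command usually looks something like this (the key file name and the public DNS are placeholders for your own values):
chmod 400 your-keypair.pem    # the downloaded key must not be readable by others, or ssh will refuse to use it
ssh -i your-keypair.pem ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com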
After executing this step you are connected to your instance. We will have to install some new packages. Some of them require root credentials, so you will have to set a new root password:
sudo passwd root
Then log in as root:
su root
Now, with root credentials, execute:
sudo apt-get update
Switch back to your normal user with the exit command and install all the required packages:
- install some libraries which will be required by RVM, Ruby and Git:
sudo apt-get install build-essential git zlib1g-dev libssl-dev libreadline-gplv2-dev imagemagick libxml2-dev libxslt1-dev openssl libreadline6 libreadline6-dev zlib1g libyaml-dev libxslt-dev autoconf libc6-dev ncurses-dev automake libtool bison libpq-dev libpq5 libeditline-dev
- install Git (the packaged Linux install, rather than building from source): http://www.git-scm.com/book/en/Getting-Started-Installing-Git
- install rvm: https://rvm.io/rvm/install/
- install Ruby:
rvm install 1.9.3
rvm use 1.9.3 --default
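To confirm the Ruby setup worked, you can check what is now on your path:
ruby -v    # should report ruby 1.9.3
gem -v     # RubyGems is installed alongside the RVM ruby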
Deploying the Application
Our sample Sentiment API is located on GitHub. Try cloning the repository:
git clone git@github.com:jerzyn/api-demo.git
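The SSH URL above assumes you have an SSH key registered with your GitHub account on the instance. If you don’t, cloning over HTTPS avoids that requirement and gives you the same code:
git clone https://github.com/jerzyn/api-demo.git
cd api-demo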
You can once again review the code and the tutorial on creating and deploying this app here:
http://www.3scale.net/2012/06/the-10-minute-api-up-running-3scale-grape-heroku-api-10-minutes/
and here
http://www.3scale.net/2012/07/how-to-out-of-the-box-api-analytics/
Note the changes (we are using only v1, as authentication will go through the proxy).
Now you can deploy the app by issuing:
bundle install
Then start the Thin server:
thin start
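Thin listens on port 3000 by default, matching the firewall rule opened earlier. If you want the server to keep running after you close your SSH session, the port and daemonize flags are handy (just a sketch of commonly used options, not a required step):
thin start -p 3000 -d    # -p sets the port explicitly, -d runs Thin in the background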
To access the API directly (i.e. without any security or access control), open: your-public-dns:3000/v1/words/awesome.json
(you can find your-public-dns in the AWS EC2 Dashboard -> Instances, in the details window of your instance)
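A quick way to test this, either from the instance itself or from your laptop, is curl (the hostname is a placeholder for your actual public DNS):
curl "http://your-public-dns:3000/v1/words/awesome.json"
A JSON response means Thin and the API are up; a connection error means port 3000 is blocked or the server is not running.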
For the Nginx integration you will have to create an Elastic IP address. Inside the AWS EC2 dashboard, create an Elastic IP in the same region as your instance and associate it with the instance (you won’t have to pay anything for the Elastic IP as long as it is associated with your instance in the same region).
OPTIONAL: If you want to assign a custom domain to your Amazon instance, you will have to do one thing:
- add an A record to the DNS record of your domain, mapping the domain to the Elastic IP address you created previously. Your domain provider should either give you a way to set the A record (the IPv4 address) directly, or a way to edit the nameservers of your domain. If they do not allow you to set the A record directly, find a DNS management service, register your domain as a zone there, and the service will give you the nameservers to enter in the admin panel of your domain provider. You can then add the A record for the domain. Possible DNS management services include ZoneEdit (basic, free), Amazon Route 53, etc. A quick way to verify the record is sketched after this list.
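Once the record has propagated you can check it from any machine with dig (the domain below is a placeholder for your own):
dig +short api.yourdomain.com A    # should print the Elastic IP you associated with the instance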
At this point your API is open to the world. This is good and bad – great that you are sharing, but bad in the sense that without rate limits a few apps could kill the resources of your server, and you have no insight into who is using your API and how it is being used. The solution is to add some management for your API…
Enabling API Management with 3scale
Rather than reinvent the wheel and implement rate limits, access controls and analytics from scratch, we will leverage the handy 3scale API Management service. Get your free 3scale account, activate it, and log in to the new instance through the provided links. The first time you log in you can choose the option to have some sample data created, so you will have some API keys to use later. Next you will probably want to take the tour to get a glimpse of the system functionality (optional) and then start with the implementation.
To get some instant results we will start with the sandbox proxy which can be used while in development. Then we will also configure an Nginx proxy which can scale up for full production deployments. There is some documentation on the configuration of the API proxy at 3scale:
https://support.3scale.net/howtos/api-configuration/nginx-proxy
and for more advanced configuration options here:
https://support.3scale.net/howtos/api-configuration/nginx-proxy-advanced
Once you sign into your 3scale account, launch your API from the main Dashboard screen, or go to API -> Select the service (API) -> Integration in the sidebar -> Proxy.
Set the address of your API backend – this has to be the Elastic IP address (unless a custom domain has been set up), including the http protocol and port 3000. Now you can save and turn on the sandbox proxy, and test your API by hitting the sandbox endpoint (after creating some app credentials in 3scale): http://sandbox-endpoint/v1/words/awesome.json?app_id=APP_ID&app_key=APP_KEY
where APP_ID and APP_KEY are the ID and key of one of the sample applications created when you first logged into your 3scale account (if you missed that step, just create a developer account and an application within that account).
Try it first without app credentials, then with incorrect credentials, and then correctly authenticated, both within and over any rate limits you have defined (the curl sketch below covers these cases). Only once it is working to your satisfaction do you need to download the config files for Nginx.
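As a sketch, the three checks might look like this with curl (the sandbox hostname and the credentials are placeholders taken from your 3scale account):
curl "http://sandbox-endpoint/v1/words/awesome.json"                                  # no credentials – should be rejected
curl "http://sandbox-endpoint/v1/words/awesome.json?app_id=wrong&app_key=wrong"       # bad credentials – authentication failure
curl "http://sandbox-endpoint/v1/words/awesome.json?app_id=APP_ID&app_key=APP_KEY"    # valid credentials – sentiment JSON from your backend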
Note: any time you get errors, check whether you can access the API directly: your-public-dns:3000/v1/words/awesome.json. If it is not available, check whether the AWS instance is running and whether the Thin server is running on the instance.
Implement an Nginx Proxy for Access Control
In order to streamline this step, we recommend that you install the fantastic OpenResty bundle, which is basically the standard Nginx core with almost all the necessary 3rd-party Nginx modules built in.
Install dependencies: sudo apt-get install libreadline-dev libncurses5-dev libpcre3-dev perl
Compile and install Nginx:
cd ~
sudo wget http://agentzh.org/misc/nginx/ngx_openresty-1.2.3.8.tar.gz
sudo tar -zxvf ngx_openresty-1.2.3.8.tar.gz
cd ngx_openresty-1.2.3.8/
./configure --prefix=/opt/openresty --with-luajit --with-http_iconv_module -j2
make
sudo make install
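You can confirm the build by asking the freshly installed binary for its version (the path matches the --prefix flag used above):
/opt/openresty/nginx/sbin/nginx -v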
In the config files downloaded from 3scale, make the following changes:
- edit the .conf file from the Nginx download
- in line 28, which is preceded by a comment telling you to change your server name, put the correct domain (your Elastic IP or custom domain name)
- in line 78, change the path to the .lua file, which was downloaded together with the .conf file (the grep sketch after this list will help locate both lines if the numbering differs in your download)
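If the line numbering differs in the files you downloaded, grep will locate both lines (the config file name is a placeholder for the one 3scale generated for you):
grep -n "server_name" YOUR-CONFIG-FILE.conf    # the server name line to point at your domain or Elastic IP
grep -n "\.lua" YOUR-CONFIG-FILE.conf          # the line referencing the downloaded .lua file whose path needs updating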
We are almost finished! The last step is to start the Nginx proxy and put some traffic through it. If it is not running yet (remember that the Thin server has to be started first), go to your EC2 instance terminal (the one you were connecting to through SSH before) and start it now:
sudo /opt/openresty/nginx/sbin/nginx -p /opt/openresty/nginx/ -c /opt/openresty/nginx/conf/YOUR-CONFIG-FILE.conf
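Later on, to pick up configuration changes or to stop the proxy, the same binary accepts the standard Nginx signals (same paths and placeholder config file name as above):
sudo /opt/openresty/nginx/sbin/nginx -p /opt/openresty/nginx/ -c /opt/openresty/nginx/conf/YOUR-CONFIG-FILE.conf -s reload    # re-read the config without dropping connections
sudo /opt/openresty/nginx/sbin/nginx -p /opt/openresty/nginx/ -c /opt/openresty/nginx/conf/YOUR-CONFIG-FILE.conf -s stop      # shut the proxy down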
The last step is verifying that traffic goes through with proper authorization. To do that, access:
http://your-public-dns/v1/words/awesome.json?app_id=APP_ID&app_key=APP_KEY
where APP_ID and APP_KEY are the ID and key of the application you want to use for the API call. Once everything is confirmed as working correctly, you will want to block public access to the API backend on port 3000, which would otherwise bypass any access control (see the sketch below).
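Blocking port 3000 simply means removing the inbound rule added when the instance was created. You can do this in the Security Group settings of the console, or with the AWS CLI it would look roughly like this (quick-start-1 is the same placeholder group name used earlier):
aws ec2 revoke-security-group-ingress --group-name quick-start-1 --protocol tcp --port 3000 --cidr 0.0.0.0/0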
If you encounter problems with the Nginx configuration or need a more detailed guide, I encourage you to check the 3scale guide on configuring the Nginx proxy: https://support.3scale.net/howtos/api-configuration/nginx-proxy. You can go completely wild customizing your API gateway. If you want to dive deeper into the 3scale system configuration (such as usage and monitoring of your API traffic), feel free to browse our Quickstart guides and HowTos.