Hello REST – Building the API

Following on from Designing a RESTful API, this post will be a guide to building the API itself.

The application will be used as a benchmark for future posts; most of my decisions are simply because this setup is in my comfort zone.

The main tools I’ll be using are,

  • Vagrant manages the virtual machine
  • Scotchbox is a one-size-fits-all LAMP stack image for Vagrant
  • Laravel is a framework that is great for building RESTful APIs
  • Composer is a package manager for PHP projects

Setting up the Virtual Machine

First we will clone Scotchbox Pro, a one-size-fits-all image which I like to use when I want to get straight into the code.

The non-pro version does not ship with PHP 7.2, but alternatives are easy to find.

Run these commands to download the Vagrantfile from GitHub,

git clone https://github.com/scotch-io/scotch-box-pro ctt-helloworld
cd ctt-helloworld

Then spin the virtual machine up (the first run may take a while, as the image needs to download).

vagrant up
vagrant ssh

After calling vagrant ssh, you will be in the shell of the guest machine. The guest machine’s “/var/www” directory now points to the directory containing the “Vagrantfile”.

All commands will be run from inside the guest’s /var/www directory.

This is good practice to ensure there are no host-machine dependencies; it will also make things easier to explain.

Setting up the project

Laravel will be the framework we are using. You could argue that Lumen would be a more efficient fit for what we are building, but Laravel will offer more flexibility later.

This command bootstraps a Laravel 5.5 (current LTS) project in a “tmp” directory, then moves the contents of that directory back into our project root. (create-project will not install into a non-empty directory, and my IDE has already added files…)

composer create-project --no-install --prefer-dist laravel/laravel:5.5.* tmp
cp -r /var/www/tmp/. /var/www
rm -rf tmp
composer install

Note that --no-install skips downloading the dependencies, so composer install is needed once the files are in place.
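As an aside, the copy step includes the directory’s “/.” entry rather than using a bare “*”, because a bare glob would skip hidden files such as .env.example. A quick sketch of the behaviour (the demo paths are throwaway names):

```shell
# Scratch directories to show that copying "src/." brings hidden files along
mkdir -p demo/src demo/dst
touch demo/src/.hidden demo/src/visible
cp -r demo/src/. demo/dst
ls -A demo/dst
```

Both the hidden and the visible file end up in the destination.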

Extended Generators will generate migration scripts from the command line.

composer require laracasts/generators --dev

Eloquent Model Generator will generate models a bit better than Extended Generators does.

composer require krlove/eloquent-model-generator --dev

To let Laravel know about the package providers (but only in the local environment), update App\Providers\AppServiceProvider::register()

    public function register()
    {
        // Service providers only required by development
        if ($this->app->environment() == 'local') {
            $this->app->register('Laracasts\Generators\GeneratorsServiceProvider');
            $this->app->register('Krlove\EloquentModelGenerator\Provider\GeneratorServiceProvider');
        }
    }

To finish setting up the project, copy the .env.example file, and update the database credentials to match the database provided by Scotchbox,

cp .env.example .env
sed -i -e 's/DB_USERNAME=homestead/DB_USERNAME=root/g' .env
sed -i -e 's/DB_PASSWORD=secret/DB_PASSWORD=root/g' .env
sed -i -e 's/DB_DATABASE=homestead/DB_DATABASE=music/g' .env
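After those substitutions, the database section of .env should read as follows (the host and port are the stock Laravel defaults):

```
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=music
DB_USERNAME=root
DB_PASSWORD=root
```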

Setting up the database

Referencing the MVP data structure from Designing a RESTful API, these commands build the migration scripts and models using Laracasts’ generators…

albums, artists, and tracks are three simple meta tables for the time being. The “HasTimestamps” trait is generated, but not “SoftDeletes”, so I define the deleted_at column manually.

php artisan make:migration:schema create_albums \
--schema="title:string, deleted_at:date:nullable"
php artisan make:migration:schema create_artists \
--schema="title:string, deleted_at:date:nullable"
php artisan make:migration:schema create_tracks \
--schema="title:string, deleted_at:date:nullable"

And finally a couple of pivot tables to define the relationships

php artisan make:migration:pivot tracks artists
php artisan make:migration:pivot albums tracks

The migration scripts will not create the schema for us, so let’s do that first

mysql -uroot -p -Bse "create schema music;"

Running the migrations will now create our tables

php artisan migrate

Completing the models

Discard the models created by Laracasts’ generators; the package falls short on a few things that the Eloquent Model Generator solves. (Note that this also removes the stock User model.)

rm -rf app/*.php

The Eloquent Model Generator builds models based on the database structure itself.

php artisan krlove:generate:model Artist \
--namespace=App\\Models \
--output-path=Models

php artisan krlove:generate:model Album \
--namespace=App\\Models \
--output-path=Models

php artisan krlove:generate:model Track \
--namespace=App\\Models \
--output-path=Models

Logic and serialisation

First, create the resources; these will be used to shape our responses.

php artisan make:resource Album
php artisan make:resource Track
php artisan make:resource Artist
php artisan make:resource Albums --collection
php artisan make:resource Tracks --collection
php artisan make:resource Artists --collection

Creating resource controllers will save some effort when routing, but the controllers themselves will contain a lot of dead code until we expand on the resources.

php artisan make:controller AlbumController --resource --model=Models\\Album
php artisan make:controller ArtistController --resource --model=Models\\Artist
php artisan make:controller TrackController --resource --model=Models\\Track

Update all controllers with this “index” method, swapping in the matching resource and model classes for each.

// Assuming the aliases:
// use App\Models\Album as AlbumModel;
// use App\Http\Resources\Albums as AlbumResources;
public function index()
{
    return new AlbumResources(AlbumModel::all());
}


The album controller’s “show” works the same way, and it needs two extra methods to handle the helper routes (as before, the resource and model names here are aliases of the generated classes). Notice how an Album model is injected into each method; this is because of the route model binding we are about to set up.

public function show(AlbumModel $album)
{
    return new AlbumResource($album);
}

public function tracks(AlbumModel $album)
{
    return new TracksResource($album->tracks);
}

public function artists(AlbumModel $album)
{
    return new ArtistsResource($album->artists);
}

Read API Resources and Retrieving models in the Laravel Documentation for more information.

Setting up routing and controllers

Routing will serve to direct HTTP requests to the controllers which we created in the previous section.

Begin by defining some explicit model bindings for the routes. This means we don’t need to check whether the objects exist further down the line (the router will handle 404s etc). Add the following lines to App\Providers\RouteServiceProvider::boot()

Route::model('album', 'App\Models\Album');
Route::model('artist', 'App\Models\Artist');
Route::model('track', 'App\Models\Track');

Define the routes in routes/web.php. Most routes can be handled with Route::resource(), but the specification defines additional ‘helper routes’, which logically map to other resources with an additional filter.

Route::group([ 'prefix' => 'v1' ], function () {
    // Album resources (our vertical prototype)
    Route::resource('albums', 'AlbumController')->only(['index', 'show']);
    Route::get('albums/{album}/tracks', 'AlbumController@tracks');
    Route::get('albums/{album}/artists', 'AlbumController@artists');

    // Other resources
    Route::resource('artists', 'ArtistController')->only(['index']);
    Route::resource('tracks', 'TrackController')->only(['index']);
});

Read HTTP Routing in the Laravel Documentation for more information.

Populating the database with dummy data

At this stage, the RESTful API is almost ready, but we don’t have any data to respond with.

I eventually plan on populating the dataset with real data, but for now I’d be happy with a dummy dataset.

To do this, I use a factory to create “fake” data, then a seeder to insert the fake data into the database.

To bootstrap our factories, run the following commands

php artisan make:factory AlbumFactory --model=Models\\Album
php artisan make:factory ArtistFactory --model=Models\\Artist
php artisan make:factory TrackFactory --model=Models\\Track

Then we need to manually update each factory in database/factories to shape our data. “Faker” is used to randomly generate values. Our data doesn’t have much shape yet, so each factory simply defines a title…

$factory->define(App\Models\Track::class, function (Faker $faker) {
    return [
        'title' => $faker->name
    ];
});

Then go to database/seeds/DatabaseSeeder.php and replace the run method with a quick-and-dirty seeder. I attempt to create as many realistic ‘shapes’ of data as possible, whilst keeping it concise…

public function run()
{
    DB::statement('SET FOREIGN_KEY_CHECKS=0;');

    // Clear existing data
    DB::table('artist_track')->truncate();
    DB::table('album_track')->truncate();
    \App\Models\Album::truncate();
    \App\Models\Artist::truncate();
    \App\Models\Track::truncate();

    // Create 50 artists
    $poolOfArtists = factory(\App\Models\Artist::class, 50)->create();

    // Create 100 albums, each 10 tracks, each with 1 random artist
    factory(\App\Models\Album::class, 100)->create()->each(static function(\App\Models\Album $album) use ($poolOfArtists) {
        $tracks = factory(\App\Models\Track::class, 10)->create()->each(function(\App\Models\Track $track) use ($poolOfArtists) {
            $track->artists()->attach($poolOfArtists->random());
        });

        $album->tracks()->attach($tracks);
    });

    // 50 tracks will be given a second artist
    \App\Models\Track::all()->random(50)->each(function(\App\Models\Track $track) use ($poolOfArtists) {
       $track->artists()->attach(
           $poolOfArtists->whereNotIn( 'id', $track->artists()->pluck('id') )->random()
       );
    });

    DB::statement('SET FOREIGN_KEY_CHECKS=1;');
}

To run the seeder, run this command:

php artisan db:seed

Read Database Testing in the Laravel Documentation for more information.

Querying the API

The app now respects the OpenAPI specification that we designed in the first post. Let’s call this the MVP.

Take a look for yourself; here are some curl requests to try…

curl -X GET http://192.168.33.10/v1/artists
curl -X GET http://192.168.33.10/v1/tracks
curl -X GET http://192.168.33.10/v1/albums
curl -X GET http://192.168.33.10/v1/albums/1
curl -X GET http://192.168.33.10/v1/albums/1/tracks
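For reference, a single-album response comes back wrapped in the “data” envelope from the specification; it should look something along these lines (the title will be whatever Faker generated for you):

```json
{
    "data": {
        "id": 1,
        "title": "Dr. Example Name"
    }
}
```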

And some 404s for good luck

curl -X GET http://192.168.33.10/v1/albums/999
curl -X GET http://192.168.33.10/v1/albums/999/artists
curl -X GET http://192.168.33.10/v1/albums/999/tracks

This post was a lot bigger than I expected it to be. In the next post in the series I’ll look at testing the application using the OpenAPI schema created in the previous post.

Hello REST – Designing a RESTful API

Whilst trying to solve problems, I often need to use other projects as a benchmark. But how do I represent this when writing a blog post? My answer is to write a “Hello world” RESTful API that will allow me to break everything down into chunks.

In a series of blog posts I will be:

  • Designing a RESTful API
  • Building an API with dummy data
  • Deploying the API

I will continue to cover new technologies and opinions using this dummy API as a benchmark.

I won’t be going too far into my decisions; fundamentally I will be building an abstract interface using the tools that I am familiar with. The purpose of my blog is to later challenge these ideas.

What’s the plan?

The objective of this post is to have an understanding of how the API will work.

Short term, I’ll be creating a theoretical minimum viable product of a RESTful API. It will be purposely (slightly) flawed, because flaws are interesting.

I’ll start with something everybody is familiar with – albums, artists and tracks. I’ll use albums to design a vertical prototype to save some time and set a benchmark of how the API will be structured further down the line.

Long term, I’d like to eventually reverse engineer something like Spotify, because it’s something I’m familiar with, it’s relatable, and I’m sure building something along those lines will allow me to cover a lot of fun topics.

Technical decision: Using OpenAPI

I’m going to use an OpenAPI specification. My reasoning:

  • It will allow me to draft some abstract concepts on-the-fly
  • OpenAPI itself is well documented (less explaining to do, for now)
  • No need to build a client for every language I play with
  • Generating tests based on this schema is TDD (plus more avenues of future content)

This is a contract-first approach to building the application. The specification I design will be used as a contract which I will later fulfil when I build the API itself.

The Specification (First Draft)

It took me about an hour to write and validate using this Swagger Validator.

swagger: "2.0"
info:
  title: Hello REST
  description: A friendly RESTful API
  version: 1.0.0

host: 192.168.33.10
basePath: /v1
schemes:
  - https
  - http

paths:
  /albums:
    get:
      summary: Search for albums
      produces:
        - application/json
      responses:
        200:
          description: Here is the list of albums you requested
          schema:
            type: object
            required:
             - data
            properties:
              data:
                type: array
                items:
                  $ref: '#/definitions/album'


  "/albums/{albumId}":
    get:
      summary: Return an album
      parameters:
        - in: path
          name: albumId
          type: string
          required: true
          description: Unique ID of album
      produces:
        - application/json
      responses:
        200:
          description: Here is the album you requested
          schema:
            $ref: '#/definitions/album'
        404:
          description: Album does not exist


  "/albums/{albumId}/tracks":
    get:
      summary: Return tracks belonging to the album
      parameters:
        - in: path
          name: albumId
          type: string
          required: true
          description: Unique ID of album
      produces:
        - application/json
      responses:
        200:
          description: Tracks belonging to album
          schema:
            type: object
            required:
             - data
            properties:
              data:
                type: array
                items:
                  $ref: '#/definitions/track'
        404:
          description: Album does not exist


  /artists:
    get:
      summary: Search for artists
      produces:
        - application/json
      responses:
        200:
          description: Here is the list of artists you requested
          schema:
            type: object
            required:
             - data
            properties:
              data:
                type: array
                items:
                  $ref: '#/definitions/artist'
                  
  /tracks:
    get:
      summary: Search for tracks
      produces:
        - application/json
      responses:
        200:
          description: Here is the list of tracks you requested
          schema:
            type: object
            required:
             - data
            properties:
              data:
                type: array
                items:
                  $ref: '#/definitions/track'

definitions:
  album:
    type: object
    required:
      - id
      - title
    properties:
      id:
        type: string
      title:
        type: string


  artist:
    type: object
    required:
      - id
      - title
    properties:
      id:
        type: string
      title:
        type: string


  track:
    type: object
    required:
      - id
      - title
    properties:
      id:
        type: string
      title:
        type: string    
 

Now that we have an idea of what we want to achieve, the next step is to build the API (it doesn’t take long)

My thoughts on testing

  • Use the language of the acceptance criteria (promise) to write a test
  • Write a test for every promise you need to keep
  • Writing tests teaches good language practices
  • When inheriting a project, create a basic test suite as a baseline
  • When migrating a project, migrate one thing at a time and use tests for integrity

If you ask my co-workers, testing is very close to my heart. As soon as I started writing tests, the crippling anxiety I faced when deploying to production disappeared overnight.

This is my first blog post and I haven’t figured out a format yet, so I’m just going to dump all of my thoughts on testing here in one go and maybe expand on some things in future posts.

Promises

Tests are abstract promises. “I promise to always (do this)”. In life it’s easy to forget your promises, especially after a long period of time. Tests are there to make sure you keep your promise.

You also don’t want to make a promise that you can’t keep. I see grooming, and the creation of promises (acceptance criteria), as the opportunity to ensure that the product owner and coder understand the contract they are about to commit to.

Promises are also baggage. It is in everybody’s interest that the baggage (code creep) is kept to a minimum, so grooming is another opportunity to simplify the contract.

Controversially, I think most unit tests are baggage, certainly on the projects I work on. It’s rare that a product team communicates that they need an exception to be thrown, but unit tests allow us to support the finer details of the requirements. Let me explain…

The language of tests

The first learning curve somebody faces when starting off with tests is choosing the type of test they need to write.

My favourite explanation is: It depends on the language used to build the requirements

Acceptance Tests
  • Language: the language of the user
  • Criteria / test name: “I can signup for a trial account”
  • Test steps:
      I click “sign up” button
      I fill in my email
      I fill in my password
      I click “Register”
      I wait for message “Registration complete”
      I am on page “My account”

API Tests (Functional)
  • Language: the language of an API (eg. RESTful)
  • Criteria / test name: “Can list accounts”
  • Test steps:
      I send GET to “/api/v1/accounts”
      I see response is successful
      I see a list of accounts

Unit Tests
  • Language: the language of code
  • Criteria / test name: “Handles empty response from ApiName”
  • Test steps:
      I expect Validation Exception when I call (new User())->save()

Which type of test should I write?

In short, it is important to have as many angles from which to break your code as possible.

I often refer back to the testing pyramid when I’m trying to explain why all types of test are important. It’s a great article; I recommend a read.

My extended interpretation of the testing pyramid is how tests can support other tests, which can make them simpler.

For example, given the acceptance test “I can see a validation message when I enter bad things”, we can assume that when a certain thing happens (for example, a certain exception is thrown), the user will see a validation message…

Given this, we can write lots of shorter/faster unit tests to ensure each scenario produces the expected message (which we know the user will see, based on our assumption).

Now, the assumption is a risk. The only “true” way of knowing each validation error will be visible to the user is to write an acceptance test for each one. But each test beyond the first has diminishing returns, and costs a lot more.
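As a sketch of what one of those shorter unit tests might look like (assuming PHPUnit; the RegistrationValidator class and its message are hypothetical, not from a real project):

```php
<?php

use PHPUnit\Framework\TestCase;

class RegistrationValidationTest extends TestCase
{
    public function testEmptyEmailProducesTheExpectedMessage()
    {
        // Hypothetical validator; substitute whatever produces your messages
        $validator = new RegistrationValidator();
        $errors = $validator->validate(['email' => '', 'password' => 'secret']);

        // The acceptance test has already proven that whatever message is
        // returned here is the one the user will see
        $this->assertSame('The email field is required.', $errors['email']);
    }
}
```

Each scenario gets one small, fast test like this, while a single acceptance test covers the assumption that messages reach the user.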

How long should I spend on tests?

Tests are unfortunately optional for most products. They don’t directly get the job done, so business logic allows us to ignore them.

This is where TDD comes in: write tests to test your code. You are testing your code anyway (aren’t you?). This way, “testing” doesn’t take extra time; it is the time.

TDD is a great practice in theory, but in practice I don’t think it lends itself to flexibility, because in the real world requirements can easily be misunderstood (or even changed) by the business mid-sprint, and that creates a lot of overhead.

My pseudo-TDD is to write tests as I code. This has evolved from the days when I would create a test controller to play with classes in, or use Postman to check my API works as expected. It takes the same amount of time, provides a decent level of flexibility, and when I am finished, my tests are complete!

When I’m in a situation where I simply don’t have time to write tests (being rushed), I will just do the work and create a skipped test. This way, when the suite is run, I can see a todo list of promises I’m not tightly enforcing (not the end of the world).

Tests improve code

I use tests to (1) test the code and (2) ensure it is nice to work with.

To write a test is to dog-food your own code and API before you show it to the world. You are a chef tasting your food before you send it out, because once the food has been eaten you can’t make any more changes.

Dependency injection, small methods, early returns, meaningful code, meaningful exceptions, separation of concerns, and other good practices – All things that I really started to understand/appreciate after writing tests.

Inheriting projects which don’t have tests

I have a habit of inheriting code-bases that have no tests. This is a scary time for me, because I’m suddenly tasked with keeping promises that other people have made, but are no longer responsible for keeping.

When I was first tasked with this scenario, my approach was to analyse the code and write a test for each objective behaviour I could see. It was a RESTful API, so my tests were along the lines of “Lists Resources”, “Can Filter By Parameter”, and “Fails when Query Parameter Is Not Provided”.

I learnt a lot from the project doing it this way, but I inferred invalid promises because I over-thought everything the code intended to do (meaningful comments and git histories were lacking, and the coders had since left).

Over-thinking led me to championing understandings/promises that were never promises in the first place. In grooming/planning I felt like I was arguing for a ‘previous intention’ to work in a certain way, even though the product team never ‘agreed’ to it. Meanwhile, my tests ‘locked in’ the behaviour, making future modification difficult.

Example: “Fails when Query Parameter is not provided” is not RESTful, yet I had written a test that locks in what was probably a defect. Because this API had old phone clients that might have relied on this behaviour, I felt the need to keep the promise, although I had no idea whether it was a promise in the first place!

What I’ve learnt: champion as few promises as you are willing to keep when inheriting a test-less project. How few will differ on a project-by-project basis, so maybe talk about how much time you are given to ‘understand’ what you have inherited.

Migrating projects

Let’s start by saying it is easier to migrate one thing at a time, so let’s look at some types of migration and see which tests support them:

  • Frontend: Acceptance
  • Backend (APIs): API
  • Backend (not APIs): Unit, Functional (non-API)
  • Framework: Acceptance, API
  • Datasets: Acceptance, API
  • Refactoring code (BAU): Acceptance, Functional, Unit
  • Views: Acceptance

Often, migration will not be so simple, but when changing everything, having a “point of reference” is critical to reducing risk.

Testing red flags

When I see one of these in a test suite, I flag it as a sign that something needs to be done, though what exactly is very situational.

  • Tests fail randomly (test isn’t running from a fixed state)
  • When a test fails, it is not clear exactly what has broken based on the name (test does too much, or test name needs tweaking)
  • When a test fails, it is not easy to replicate/understand why (test is trying to be too clever, or something needs simplifying)
  • Seemingly unrelated parts of the code break in a test (separation of concerns, or a god class)
  • A single test is slow (something specific needs optimising, or the test does too much)
  • The suite takes too long to run (the app is getting too big and should be broken down, or there are too many tests)
  • Tests break too easily (the test is not specific enough, or is too coupled to implementation detail)
  • Lots of skipped tests (find the time to unskip them)

It’s just a list of things off the top of my head for now; I’ll probably expand on them in the future.