How (And Why) We’re Building An API

We’ve explained what Mongo and NoSQL are, and why we’re using them. Now it’s the turn of the actual data access and manipulation methods, something we’ve termed Nucleus.

Nucleus is part of a bigger plan which Alex and I have been looking at around using SOA (Service Oriented Architecture) principles for data storage at Lincoln; in short, building a central repository for just about anything around events, locations, people and other such ‘core’ data. We’re attempting to force any viewing or manipulation of those data sets through central, defined, secured and controlled routes, more commonly known as Application Programming Interfaces, or APIs.
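To make that concrete, here’s a rough sketch of what a consumer of such an API might look like. The endpoint, parameters and response fields below are purely illustrative, not the real Nucleus interface.

```php
<?php
// Illustrative consumer of a central, Nucleus-style API. The URL, parameters
// and response fields here are made up for the example, not the real service.
$url = 'https://nucleus.example.lincoln.ac.uk/events?location=MB1019&format=json';

$response = file_get_contents($url);
if ($response === false) {
    die('Could not reach the events API');
}

$events = json_decode($response, true);
foreach ($events as $event) {
    // Every consumer sees the same validated, consistently shaped data.
    echo $event['title'] . ' starts at ' . $event['start'] . "\n";
}
```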

In the past it would be common for there to be custom code sitting between services, responsible for moving data around. Often this code would talk directly to the underlying databases and provide little in the way of sanity checking, and following the ancient principle of “Garbage In, Garbage Out” it wouldn’t be unheard of for a service to fail and the data synchronisation script to duly fill an important database with error messages, stray code snippets and other such invalid nonsense. The applications which relied on this data would carry on as though nothing was wrong, try to read it, and then crash in a huge ball of flames. Inevitably this led to administrators having to manually pick through a database to put everything back in its place.
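This is exactly the sort of thing a single API layer can refuse to accept. As a minimal (and entirely made-up) sketch of the kind of sanity checking involved, with invented field names:

```php
<?php
// A minimal, made-up sketch of the sanity checking an API layer can enforce
// before anything is allowed to touch the database. Field names are invented.
function validate_event(array $event) {
    $errors = array();
    if (!isset($event['title']) || trim($event['title']) === '') {
        $errors[] = 'Missing title';
    }
    if (!isset($event['start']) || strtotime($event['start']) === false) {
        $errors[] = 'Start time is not a parseable date';
    }
    return $errors;
}

$incoming = array('title' => 'Lecture', 'start' => 'not-a-date');
$errors = validate_event($incoming);
if (count($errors) > 0) {
    // Reject the record outright rather than letting garbage reach the database.
    header('HTTP/1.1 400 Bad Request');
    echo json_encode(array('errors' => $errors));
    exit;
}
```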


To NoSQL or not to NoSQL?

As part of Total ReCal we’ve been taking a look at the so-called NoSQL approach to databases. I gave a quick overview of NoSQL and why we were looking at it in a previous blog post, so I’m going to skip all the gory details of what NoSQL actually is (and why we’re using it), and leap straight into the discussion of whether it’s any good, whether it’s ready for prime time, and whether it’s ready for the HE sector to actually use in production.

Is it any good?

In a word, yes. In slightly more words, yes, but only if you use it in the right place. NoSQL is excellent at providing fast, direct access to massive sets of unstructured data. By ‘fast’ I mean ‘thousandths of a second’, and by ‘massive’ I mean ‘billions of items’. On the other hand, if you’re after rock-solid data integrity and the ability to perform functions like JOIN queries then you’re out of luck and you should stick to an RDBMS. The two approaches aren’t competing, but offer complementary functionality. A corkscrew and a bottle opener both let you into your drink, but it’ll be amazingly awkward to open your beer with a corkscrew.
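For a flavour of what the document-store side looks like in practice, here’s a rough sketch using the PHP Mongo driver of the era; the database, collection and field names are just examples.

```php
<?php
// A rough sketch using the PHP Mongo driver of the time. Documents with
// different shapes happily share a collection, and lookups on an indexed
// field come back in milliseconds. Names here are purely illustrative.
$mongo  = new Mongo(); // connects to localhost:27017 by default
$events = $mongo->selectDB('nucleus')->selectCollection('events');

// Two events with different structures go into the same collection.
$events->insert(array('user' => 'sjackson', 'type' => 'lecture',  'room' => 'MB1019'));
$events->insert(array('user' => 'sjackson', 'type' => 'deadline', 'module' => 'CMP1027'));

// One indexed query pulls both back. There's no JOIN to lean on, so related
// data either lives inside the document or costs you a second query.
$events->ensureIndex(array('user' => 1));
foreach ($events->find(array('user' => 'sjackson')) as $event) {
    print_r($event);
}
```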


What We’ve Been Up To

It’s all been a bit quiet on the Total ReCal front for the past week or so, but not because we’ve been quietly doing nothing. Instead we’ve been quietly working on the supporting systems which let Total ReCal do its thing without needing to handle every single aspect of time/space management, user authentication and who knows what else.

The first thing we’ve got mostly complete is our new authentication system, built around the OAuth 2.0 specification (draft version 10). For those of you unfamiliar with OAuth, it’s a way of giving systems authorisation to perform an action without actually handing over a user’s credentials, much as modern luxury cars come with a ‘valet key’ which might give the valet a limited driving range, a limited top speed and no ability to open the boot. In the case of the University, we’ve come up with a service whereby a user (in this case a student or staff member) issues authorisation for a service to access or modify data stored within the University on their behalf.
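For the curious, the flow looks roughly like the sketch below from Total ReCal’s side. The endpoints, client credentials and scope names are entirely hypothetical, and the parameter names follow the general OAuth 2.0 shape rather than quoting the draft word for word.

```php
<?php
// A hedged sketch of the authorisation-code flow from Total ReCal's side.
// Endpoints, the client id/secret and scope names are all hypothetical.

// Step 1: send the user to the authorisation server to approve (or refuse) access.
function redirect_to_authorise() {
    $params = array(
        'response_type' => 'code',
        'client_id'     => 'totalrecal',
        'redirect_uri'  => 'https://totalrecal.example.ac.uk/callback',
        'scope'         => 'timetable_read assessments_read library_read',
    );
    header('Location: https://sso.example.ac.uk/oauth/authorise?' . http_build_query($params));
    exit;
}

// Step 2: back at the callback, swap the short-lived code for an access token.
// The application only ever holds this token, never the user's password.
function exchange_code_for_token($code) {
    $ch = curl_init('https://sso.example.ac.uk/oauth/access_token');
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query(array(
        'grant_type'    => 'authorization_code',
        'code'          => $code,
        'client_id'     => 'totalrecal',
        'client_secret' => 'not-the-real-secret',
        'redirect_uri'  => 'https://totalrecal.example.ac.uk/callback',
    )));
    return json_decode(curl_exec($ch), true);
}
```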

Taking Total ReCal as the example, the user would issue a key which allows Total ReCal to read their timetable, assessments data and library data (from which it can extract various events such as lectures, hand-in dates and book due dates). What it doesn’t give is permission to read personal details, to book rooms under that person’s authority, to renew library books or indeed anything else which requires a specific permission. In addition to this, Total ReCal never sees the user’s authentication information – it simply doesn’t need to, because the key it’s been given by the user is authority enough to do what it needs.
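On the other side of the fence, the service holding the data checks what the key actually covers before answering. Again, the scope names and the token lookup below are invented purely for illustration.

```php
<?php
// The data-holding service checks what the key actually covers before
// answering. Scope names and the token lookup are invented for illustration.
function lookup_scopes_for_token($token) {
    // Stand-in for a real lookup against the token store.
    return array('timetable_read', 'assessments_read', 'library_read');
}

function handle_request($token, $required_scope) {
    $granted = lookup_scopes_for_token($token);

    if (!in_array($required_scope, $granted)) {
        // The key doesn't cover this action, so refuse it outright.
        header('HTTP/1.1 403 Forbidden');
        echo json_encode(array('error' => 'insufficient_scope'));
        exit;
    }
    // ...otherwise fetch and return whatever the token does permit.
}

// A key scoped to reading timetables can't be used to, say, book a room.
handle_request('example-token', 'rooms_book');
```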

We need OAuth for a variety of reasons. First of all, we were getting bored of having to write a whole new authentication system for every single application, and this makes our lives much easier. Secondly, and more relevantly, we want Total ReCal to be a demonstration of the Service Oriented Architecture way, showing that it’s possible to make use of small, focussed services which we bolt together as we need, rather than monolithic applications which do everything but don’t play nicely with other monolithic applications trying to do everything. Authentication is a key example of this, since it’s something common to almost every application. Thirdly, we want to explore more ways of giving the user control, and this is one of them. By relying on the OAuth authorisation route, users are given crystal clear information on what Total ReCal is, what it does, and how it intends to use their information. It’s then up to the user whether they want to use Total ReCal or not, and they can revoke the permission at any time. In future we hope to see lots more applications take this route, not necessarily just from within the University but also from outside.


Why NoSQL?

After looking at the initial brief for Total ReCal, we realised that it would be necessary to build a new data storage layer to handle the time/space information which drives the project. There are many reasons for this, both technical and political, but the key one is that since we are running what is effectively an abstraction and amalgamation service, we want to be able to interface directly with our own copy of the data; here’s why.

Speed is often considered a luxury when dealing with large data sets; especially in larger institutions it’s common to think nothing of waiting a few minutes for a report to finish building or for an operation to finish processing. We wanted to offer something you could happily hit with 20-30 queries a second over an API. This is particularly relevant given our larger Nucleus un-project to expose public (and some private) data over APIs to allow mashups. In short, we don’t want to have to wait for even half a second whilst another service gets the data we’re after, and we especially don’t want to waste more time parsing the data into a useful format.

We looked at several possibilities for how to store the data. An obvious one to take a look at is a traditional RDBMS (Relational Database Management System) such as PostgreSQL or MS-SQL. In this instance we would most likely have been using MySQL, since it fits smoothly into the almost universally supported LAMP (Linux, Apache, MySQL, PHP) stack which is available on our key development server. Alex and I are both well-versed in using MySQL as a database and interfacing with it using PHP, so had we opted for an RDBMS it would have been the obvious choice despite the rest of the University standardising on MS-SQL.
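Had we gone that way, the code would have looked something like the sketch below: a schema agreed up front, prepared statements, and JOINs at query time. Credentials, table and column names are illustrative only, and a ‘users’ table is assumed to exist.

```php
<?php
// Sketch of the road not taken: PHP talking to MySQL via mysqli, with a
// schema agreed up front and a JOIN at query time. Credentials, table and
// column names are illustrative only (a 'users' table is assumed to exist).
$db = new mysqli('localhost', 'recal', 'secret', 'nucleus');

// Every event has to fit the same columns, decided in advance.
$db->query('CREATE TABLE IF NOT EXISTS events (
    id INT AUTO_INCREMENT PRIMARY KEY,
    user_id INT NOT NULL,
    title VARCHAR(255) NOT NULL,
    start_time DATETIME NOT NULL
)');

// The pay-off for that rigidity: strong integrity and JOINs at query time.
$username = 'sjackson';
$stmt = $db->prepare('SELECT u.username, e.title, e.start_time
                        FROM events e JOIN users u ON u.id = e.user_id
                       WHERE u.username = ?');
$stmt->bind_param('s', $username);
$stmt->execute();
$stmt->bind_result($who, $title, $start);
while ($stmt->fetch()) {
    echo "$who: $title at $start\n";
}
```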
