Once we were able to get in (traffic, parking and registration were somewhat chaotic!) we could settle down to enjoy the various presentations on offer. Since several Bumblebees were there, our different interests took us to different streams. The benefit is that you now hear about a wider variety of talks than any one person could cover.
I listened to the keynote talk, some of the so-called ‘Hot Topic’ talks and a few other sessions related to optimisation and tool choice. Here are my takeaways from the Summit:
- AWS really does have a full-featured set of tools available to teams focused on delivering reliable applications and solutions
  - The delivery point can be cloud-based (but needn’t be)
  - It is feasible to set up and operate a fully automated delivery pipeline that also scales its capacity with demand
- One must not view AWS as just a set of servers that happen not to be on-premise
  - If that is your view, you are missing out on some major benefits of the AWS environment, such as its scalability, robustness and redundancy
  - Your view of the required architecture will not give you the best solution
  - Your approach to optimising a solution must differ from what you would do with an application on a physical server
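The demand-driven scaling mentioned above can be illustrated with a toy target-tracking rule, a much-simplified version of the idea behind cloud auto scaling. All the thresholds and numbers here are hypothetical, chosen only to make the sketch concrete:

```python
import math

# Illustrative sketch of a demand-driven scaling decision: track a target
# average CPU utilisation by growing or shrinking the fleet proportionally.
# The target, minimum and maximum values are hypothetical.

def desired_instances(current, avg_cpu_percent, target=50, minimum=1, maximum=10):
    """Return the instance count that would bring average CPU near `target`,
    clamped to the allowed [minimum, maximum] range."""
    wanted = math.ceil(current * avg_cpu_percent / target)
    return max(minimum, min(maximum, wanted))

print(desired_instances(4, 75))   # 6  — overloaded, so scale out
print(desired_instances(4, 20))   # 2  — idle, so scale in
print(desired_instances(4, 200))  # 10 — capped at the maximum
```

In a real deployment this decision is made by the platform itself from live metrics; the point of the sketch is only that capacity follows demand rather than being fixed up front.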
Because of the total environment that AWS offers, the end-user does not need to worry about things like duplicate servers, making the best use of a physical machine’s capacity, failover and many other non-functional areas. This means that an end-user can really focus on their business, and on making it the best it can be. It also means that the end-user can build on an architecture that provides the best outcome and services for the application and its users, rather than one dictated by where and how the application is deployed on physical machines.
Viewing a solution as access to a set of required services also means that scaling up and down is very easy, and optimising the delivery can be treated like a linear programming problem with multiple variables (or dimensions). The end result is that one pays only for the resources used, and one does not have to provision physical hardware that may sit under-utilised.
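The pay-for-what-you-use point can be made concrete with a small comparison: paying per unit-hour actually consumed versus provisioning fixed capacity for the peak. All rates and usage figures below are hypothetical and are not AWS pricing:

```python
# Illustrative sketch: pay-per-use cost versus provisioning for peak demand.
# Usage is a list of capacity units consumed in each hour of a day.

def pay_per_use_cost(hourly_usage, rate_per_unit_hour):
    """Cost when you pay only for the capacity actually consumed each hour."""
    return sum(units * rate_per_unit_hour for units in hourly_usage)

def fixed_provision_cost(hourly_usage, rate_per_unit_hour):
    """Cost when you must provision for the peak hour, whether used or not."""
    peak = max(hourly_usage)
    return peak * rate_per_unit_hour * len(hourly_usage)

# A day that is mostly quiet (2 units) with a short spike (10 units).
usage = [2] * 20 + [10] * 4
rate = 0.5  # hypothetical cost per unit-hour

print(pay_per_use_cost(usage, rate))     # 40.0
print(fixed_provision_cost(usage, rate)) # 120.0
```

The gap between the two figures grows with how spiky the demand is, which is exactly the case where elastic, usage-based delivery pays off.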
It is also possible to select one’s analysis tools based on the kind of output one needs, as well as the rapidity and frequency of updates. If one needs a weekly view of a dataset, the tools used would differ from those providing a view that is always no more than five minutes old. Again, the optimisation process is more granular than just “how much memory, how much CPU, how much storage?”. And again, the end result is that one pays only for what is used at any point.
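That decision process can be sketched as a simple mapping from required data freshness to a style of processing. The thresholds and category names here are hypothetical and do not refer to specific AWS services:

```python
# Illustrative sketch: choose a processing style from how stale the data
# is allowed to be. Thresholds are hypothetical.

def processing_approach(max_staleness_minutes):
    """Pick a (hypothetical) analysis style for a required data freshness."""
    if max_staleness_minutes <= 5:
        return "streaming"          # near-real-time pipeline
    if max_staleness_minutes <= 24 * 60:
        return "scheduled batch"    # e.g. nightly jobs
    return "periodic report"        # e.g. weekly extracts

print(processing_approach(5))            # streaming
print(processing_approach(7 * 24 * 60))  # periodic report
```

The tighter the freshness requirement, the more continuously resources run, so the choice directly drives what you pay for at any point.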
What I hope is becoming clearer is that our approach to solution delivery can change because of the cloud and its toolset. More importantly, our approach should change so that our clients gain the most benefit from the cloud.
I think these are issues that we will be revisiting frequently in the near future.
Author: Bruce Logan – Principal Consultant Bumblebee Consulting