A guide to embracing the Automation Revolution
Despite the market hype and doom-mongering hyperbole, automation is not going to bring about the apocalypse anytime soon. But it is going to change the way we work dramatically. As automation specialists, we see it as our duty to make sure the automation revolution brings about a new normal that will improve industry, not destroy it. […]
Blue-Green Deployment with Route53 and CloudFormation by Yufang Zhang
Automation • AWS • Cloud • Continuous Delivery • DevOps
Having an automated deployment process, or Continuous Delivery, can hugely reduce the time it takes to get “done” software released. Blue-Green Deployment, one of the techniques used within it, can reduce downtime by maintaining two identical production environments. Only one of the two environments has live traffic going in at any time. For […]
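The switching idea behind this excerpt can be sketched in a few lines. The environment names and weight values here are illustrative assumptions; a real cutover would update Route53 weighted record sets via the AWS API (for example boto3's `change_resource_record_sets`) rather than flipping a dict:

```python
# Sketch of the traffic switch at the heart of Blue-Green Deployment.
# The weights dict stands in for a pair of Route53 weighted record sets;
# "blue"/"green" and the 0/100 weights are illustrative, not prescriptive.

def switch_traffic(weights):
    """Route all live traffic to the currently idle environment."""
    live = max(weights, key=weights.get)                # env taking traffic now
    idle = next(env for env in weights if env != live)  # the standby env
    return {live: 0, idle: 100}                         # flip blue <-> green
```

Because both environments are already provisioned and identical, the switch is just a routing change, which is what makes rollback as cheap as switching back.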
DevOps: Coming to an enterprise near you.
In an industry where hype is the norm the DevOps movement has been fairly low-key until quite recently. Well, low-key in the enterprise perhaps, but there is a massive and very passionate community that has been doing great things for some time. Things like deploying code to production, at scale, every 15 minutes or creating super computer scale systems for pharmaceutical research in the cloud and paid for with a personal credit card.
DevOps is now coming to an enterprise near you and will have a huge impact in 2015 and beyond, so get ready.
To understand its relevance, compare DevOps with the production line: DevOps is its IT equivalent.
The production line revolutionised industry and underpins our consumer world, delivering a dazzling array of innovative products that are accessible to anyone.
While web and gaming companies pioneered this space, its merits have been identified by global software companies, retailers, banks and even heavy industry.
Over five years we have observed many engineers content to remain blissfully ignorant of DevOps. Others have dismissed it, but take note: this is changing, and changing fast, and many enterprises have now nailed their colours to the DevOps mast.
DevOps is essential, so it is inevitable; it will become ubiquitous in the enterprise and is fundamental to redressing the impact that ITIL and out-sourcing have had on innovation and expertise.
How can we be so confident? DevOps shortens the software development lifecycle, reduces waste (time, process, repetition) and improves quality, enabling you to focus on what is important to the business and to innovate. Most fundamentally, it is about automating all things, including process, infrastructure, deployment, test, build and change. This is something that many organisations have tried to tackle alone, without success.
So ask yourself:
1. Do you have the culture, capability and confidence to commence your DevOps journey?
2. Do you know where to start and what to avoid?
3. Do you understand what DevOps best practices look like and can you avoid anti-patterns?
DevOps is a C-cubed world. Culture, Capability and Confidence. You need all three to succeed.
Don’t fall into the common trap of assuming this is simply about adopting the cloud or implementing tools – talk to someone who knows this space and has the hands-on experience to advise.
Seven Key Criteria on Which to Evaluate a PaaS Provider, and Two Red Herrings to Avoid…
In the first of two posts on the topic, Kris Saxton highlights the key criteria he sees in evaluating the plethora of new PaaS solutions that are coming to market, as well as offering a couple of “red herrings” to be careful to avoid…
There was a time just after the iPod was launched that you could gauge the level of general insanity on the Internet by the rate at which new iPod docking stations were launched. With a new one launched seemingly every week (Empire at the time of writing), PaaS projects are starting to feel like a good candidate yardstick for the hype around cloud. But how do you select the right platform for your needs?
PaaS Fit for Large Enterprise?
As with all of these nebulous computing trends (cloud, DevOps), I have to spend a bit of time defining what PaaS means to us at Automation Logic. As a consultancy focused on the large enterprise, our customers are typically blessed with at least one of the following: legacy, regulation, change control, complex internal processes, a terrible canteen. So the PaaS variant that I have in mind is the sort where you are provisioning more complex services on top of your existing IaaS, rather than the black-box development ecosystem where all you need to contribute is application code. In short: more Cloud Foundry than Heroku. A platform here means a complex, multi-node service or environment required to host an application, rather than just a hosted language runtime.
So, if you’re working for an Internet startup with a single product based on a cloud you didn’t build, lucky you, you can get back to your flat white. For the rest of us, read on.
Still here? Good. With this type of PaaS in mind, there’s no doubt that the potential to go beyond the infrastructure and manage entire services and environments has substantial value, but in such a fast moving, immature sector how do you ensure you aren’t backing a lemon?
Having implemented a few of these now, this article outlines what we think are a sensible set of criteria for large enterprises to adopt when evaluating a PaaS. It also challenges a few commonly pushed criteria which we think have little real merit (at least right now). In a follow-up post, we’ll overlay these requirements onto the current PaaS frontrunners and see how they measure up.
1. High Level Objects
In order to be able to provision and manage platforms, a PaaS needs to be able to describe and manage objects beyond simple compute, storage and networking – it has to look beyond the infrastructure. What these objects represent can be quite varied: they can be arrangements of servers (server groups, tiers) or provisioning activities (change control, monitoring enrolment); essentially they represent all the things you need to sort out to produce a fully-fledged service, environment or platform. Whilst the objects themselves are little more than named nodes, they act as pegs on which to hang more interesting stuff such as relationships, ordering, data and workflow (all described next). Consequently, a rich taxonomy for these higher level objects is a good sign of a flexible PaaS, the converse being something that is likely platform-specific (e.g. good at deploying Java onto AWS but not much else).
2. Dependencies and Ordering
Even if you have some magical app where stateless nodes can join and leave as they please, chances are (in enterprise land) there will be elements either side of the main app which must also be automated as part of the platform provisioning process, and which have to happen in a certain order. Whether it’s declarative dependencies (nice) or procedural ordering (good enough), your PaaS needs to be able to support this capability.
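As a rough illustration of how declarative dependencies yield a procedural order, a provisioning sequence can be derived with a topological sort. The component names here are hypothetical; `graphlib` is in the Python standard library (3.9+):

```python
# Derive a procedural provisioning order from declarative dependencies.
from graphlib import TopologicalSorter

# Each component maps to the set of components it depends on.
dependencies = {
    "load_balancer": set(),
    "database": set(),
    "app_tier": {"database", "load_balancer"},  # app needs DB and LB first
    "monitoring": {"app_tier"},                 # enrol monitoring last
}

# static_order() yields each node only after all of its dependencies.
provision_order = list(TopologicalSorter(dependencies).static_order())
```

The nice property of the declarative form is that the PaaS, not the author, owns the sequencing, so adding a component only means declaring what it needs.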
3. Data Persistence and Discovery
Provisioning a service involves changing the environment in which you’re operating: IP addresses get consumed, things get named, new nodes come online. When you move beyond the single server and need to start provisioning platform components that relate to one another, you need a method of exchanging information between those components such that they can dynamically configure themselves. The simplest example is a two-tier platform with a database and front-end component. The front-end needs to connect to the database but doesn’t know how until that database is provisioned (and given an IP address). In this scenario either the database component writes its IP address back to the persistence layer for the front-end to pick up later, or the front-end component is able to discover this information through a real-time network query. I’ll give practical examples of how various PaaS (and IaaS) implement this in my next post.
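A minimal sketch of the write-back variant, with an in-memory dict standing in for a real persistence layer such as etcd or Consul; the key name and IP address are illustrative assumptions:

```python
# Write-back discovery for the two-tier example: the database component
# records its address in a shared persistence layer so the front-end can
# configure itself later.

registry = {}  # stands in for etcd/Consul/the PaaS's own data service

def provision_database():
    ip = "10.0.0.12"                    # address assigned at provision time
    registry["platform/db/ip"] = ip     # write back for later consumers
    return ip

def provision_frontend():
    db_ip = registry["platform/db/ip"]  # discover the database address
    return f"front-end configured with db={db_ip}"

provision_database()
frontend_config = provision_frontend()
```

The real-time query variant replaces the dict lookup with a network call (DNS, a service catalogue) at the moment the front-end starts, at the cost of a runtime dependency on the discovery service.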
4. Separation of Model and Workflow
Generally you want to separate (or loosely couple) the modelling aspects of the PaaS, which describe the platform components and their relationships, from the code which actually performs the service provisioning. It’s a natural break and allows you to develop and maintain all the provisioning code (the bulk of which will be integrations with 3rd party services) independently. Tight coupling to the point where you can’t even really tell workflow from the data model or (worse) from internal PaaS code is a sign that you’re looking at a point-solution and not something that’s going to survive the new services you will need to integrate over the life of the PaaS. Ideally, the workflow component will be something that can run independently of the PaaS; that way you’ll be able to reuse it in your IaaS and for other IT process automation tasks.
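One way to picture this loose coupling: the model is pure data (it could just as well live in YAML or JSON on disk), while the provisioning code is a set of independently maintained handlers that a small workflow engine dispatches to. This is a sketch under assumed names, not any particular PaaS’s API:

```python
# The model: data only, no code. Could be serialized and versioned
# independently of the provisioning implementations.
platform_model = [
    {"component": "database", "handler": "create_vm"},
    {"component": "front_end", "handler": "create_vm"},
]

def create_vm(component):
    # In reality: a 3rd-party integration (IaaS API call, CMDB update, ...).
    return f"provisioned {component}"

# Provisioning code registered by name, maintained separately from the model.
handlers = {"create_vm": create_vm}

def run(model):
    """Workflow engine: walks the data model and dispatches to handlers."""
    return [handlers[step["handler"]](step["component"]) for step in model]

results = run(platform_model)
```

Because the engine only knows handler names, a new 3rd-party integration is a new entry in `handlers`, never a change to the model format or the engine itself.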
5. Lifecycle management
A server is not always just for Christmas. PaaS projects can be guilty of wishful thinking: they assume they are dealing with stateless, transient servers that can come and go without any wider impact (wouldn’t life be easy if everything was just a web server dishing out static files). The reality is that most servers are still provisioned with the assumption that they will be, if not long-lived, then at least around long enough that they need ongoing management. Patching, auditing, growing, shrinking. It all needs doing, otherwise you’ve just created an automated muck-spreader.
6. Open APIs
Do I really even have to write this? If your PaaS doesn’t have an open, comprehensive, robust, documented, public, supported API for everything – don’t touch it. Throw it in the bin, then throw yourself in the bin for even considering it – what were you thinking?!
7. Rules Engines
Workflow can be further subdivided into business rules and provisioning logic and there’s an argument for keeping the two separate. Business rules answer questions such as: “if I’m in development, provision to AWS, if I’m production, provision to our private cloud”, whereas provisioning logic takes care of the actual implementation of these rule outcomes, usually with a focus on 3rd party integration. The differences in the developers, maintainers and governance of these two types of content usually warrants them being kept separate – you may even implement them in separate tools.
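A toy sketch of that separation, with the business rules as a reviewable data table and the provisioning logic as ordinary functions; the environments, providers and names are all illustrative assumptions:

```python
# Business rules: answer "where should this run?". Being plain data, they
# are easy for non-developers to review and could even live in a separate
# rules tool with its own governance.
placement_rules = {
    "development": "aws",
    "production": "private_cloud",
}

# Provisioning logic: stands in for the 3rd-party integrations that carry
# out the rule outcomes. Maintained by a different team, on its own cadence.
def provision_on_aws(app):
    return f"{app} -> AWS"

def provision_on_private_cloud(app):
    return f"{app} -> private cloud"

provisioners = {
    "aws": provision_on_aws,
    "private_cloud": provision_on_private_cloud,
}

def deploy(app, environment):
    target = placement_rules[environment]  # evaluate the business rule
    return provisioners[target](app)       # hand off to provisioning logic
```

Changing the business decision (say, moving development to a different cloud) is then an edit to the rule table, with no change to any integration code.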
And those two Red Herrings…
1. IaaS Agnostic
This is the classic kind of requirement I see coming from industry analysts (I mean, who wouldn’t want to avoid vendor lock-in?), but let’s consider what this would actually be like to implement today. With no common data formats or interface definitions for IaaS consumption, you’d have to reimplement your IaaS integrations over and over again for each IaaS you wanted to remain ‘agnostic’ from. To be truly IaaS agnostic today means integrating with every IaaS provider and maintaining those integrations independently – obviously this is bonkers. Decide which IaaS providers you want to use, pick a PaaS which supports them (or allows you to develop that support) and then just live with it.
When I first went to India, I went to my Doctor, worried about getting ill. My Doctor said: “don’t worry about getting ill in India; you’re going to get ill – so don’t worry about it.”
Similarly, don’t worry about vendor lock-in. You aren’t going to experience this so much as solution lock-in. Pick your PaaS with a two-year lifespan, make sure it can deliver the business value you need in that kind of timeframe and then expect to replace it. Today, your efforts are better spent ensuring your IaaS exposes nice, clean interfaces so that you can more easily replace your PaaS when the time comes (and it will).
2. Hybrid Cloud
Similar to the above, let’s be serious here. We (as an industry) don’t even have a common format for describing a compute node. So unless you’re willing to abstract based on the lowest common denominator across all potential cloud providers, you aren’t going to be dynamically moving workloads based on compute spot price like I see in so many pitiful marketing slides. At best your hybrid cloud will consist of a catalogue of items which can run on multiple cloud providers, but expect the underlying provisioning automation to be cloud-specific (i.e. you’ll have to maintain them all separately).
That’s all for now, I’ll post the PaaS evaluations next.
Time for a Twix.
The Six Perfect Patterns to succeed at DevOps
So what are the six patterns you need to succeed at DevOps?
1. A Real Executive Sponsor
Proper sponsorship and on-going support is key; it is the single most important factor in your success. Without an actively engaged senior leader you will fail. Your sponsor should have cross-departmental responsibility, i.e. own the business service and the IT functions that it uses. They are part of the team and should be engaged continuously as part of the decision-making process and in measuring success.
2. A Culture of Continuous Improvement
We’ve had this term ringing in our ears for years. Continuous Improvement is at the heart of ITIL and Six Sigma. Yet ITIL and Six Sigma are constrained – we need something holistic and measurable. With DevOps and the body of experience we have acquired, we now have the tools and experience to measure the effectiveness of your project, the management regime it uses, the quality of the software and the efficiency of operations. Not only can we measure these, but we can also work out the right triggers to change things now.
DevOps is practical – not an abstract theory and it is core to your Continuous Improvement process.
3. The Scientific Method
Developers and Administrators have to adopt the principles that make manufacturing work. When building systems we start with a requirement, develop a story, build something, test it, analyse the results, and change variables to see what impact they have on results. In a DevOps world it is essential to capture and analyse data. The purpose of automating and adopting build systems is to create a repeatable process. With comprehensive automation and sensitive instrumentation it becomes possible to make a change to test a hypothesis and measure its impact. Measure everything. Test everything. Make small, controlled alterations and measure the impact.
4. A Drive to Standardisation
I have long argued that automation in the cloud is easy, in a virtual world it is not too hard, and with physical it depends… The drive to standardise is well advanced and should be within reach of all organisations. The balance is working out what to standardise, how to make standards extensible and where to allow some level of creative licence. I truly hope we have reached the point where the argument to standardise is won.
5. Investment in…
How often do we hear that our biggest investment is our people? Train them, give them responsibility, form high-performance teams. Encouraging your top performers to engage in community activities, share their thoughts and contribute to non-IT projects will help them to develop better skills and add value to the business. Avoid the superhero culture at all costs. Find the ‘rose-tinted sceptic’ and test their mettle. When they say ‘we can do this better’, test what that means and how they would tackle the problem; qualify this further by understanding how they would measure their changes and how they would take the team with them. Then give them rope!
6. Supplier & Vendor Management
Outsourcing and off-shoring have been of short-term benefit and they are not going away quickly, yet I rarely visit an organisation where there is not an acknowledged cost. If you are in a position where you rely on outsourcing or off-shoring align yourself and your supplier more effectively. Manage your suppliers and define the ‘template’ that they must deliver to. Tools, processes, quality and acceptance are all things that you own and should enforce on your supplier. A good partner will embrace a relationship where the client explains precisely what is needed and develops a ‘technical’ contract that helps them to deliver more quickly.
It’s a win-win…
Observations from a Year of 'Enterprise'…
…Is it possible to change behaviour from ‘beat your colleague’ to ‘beat your competition’?
I have spent many years working in startups and have become accustomed to teams of people spending their energy working together to create things and solve problems. But I have spent most of the last year helping enterprises adapt to the changing needs of the IT market, most specifically with DevOps and Continuous Delivery. Perhaps I am a little slow in reaching this conclusion, but the most significant difference between the two environments dawned on me only recently.
In pretty much every enterprise I have visited over the past year ALL I have witnessed is a ‘beat your colleague’ behaviour.
Is it possible to change behaviour from ‘beat your colleague’ to ‘beat your competition’?
Let me provide some examples that we can work with.
– We’re not contracted to do that!
This was a direct response heard while working with a professional services delivery team on behalf of a major IT vendor. The vendor had sold a transformation project and subsequently carved up the work between its many divisions. During the implementation phase each division was battling for revenue and trying to limit the work it must perform. Let battle commence. I guess there is no need to explore what the client is experiencing!
– Them Vs. Us
A forward-thinking vendor proposed a transformation programme to a very large financial services customer that would modernise application delivery and operations. The proposal was fully costed and the vendor was prepared to invest and back up its proposal by underwriting the financial risk. I have not spent much time myself working out whether the vendor could deliver; that could come later. The team responsible for managing the estate decided the best defence was to create its own business case that simply matched the vendor’s proposal.
No-one has a good breakdown of costs and no-one is prepared to collaborate to produce one: a simple mathematical exercise that takes little notice of the detail in the proposal.
– Down to Earth With a Bump
An organisation that I had come to rate quite highly as a visionary caused me to come down to earth with a bump. I had worked with this organisation some years ago to help develop their ideas for a very early cloud. The person leading it impressed me and certainly talked a good story. However, as I re-engaged I found that the business had decided not to consume the service that he pioneered and had removed itself from using any subsequent offerings. I have my own theory on some of the issues that happened here, but the bottom line is that the technical team and business are at odds; is this an issue of agenda or of capability?
Hearing the arguments between proponents at board level is quite an eye-opener.
– the Technical Design Authority who makes promises of delivery without any idea if, or how, they can be implemented
For much of my career I have been the Technical Design Authority. This is a role that has caused me many nights of lost sleep and considerable worry; it is also balanced by some great experiences and lots of learning. No-one knows everything and there is great power in accepting one’s weaknesses, but to execute this role one has to be diligent and explore the implications of a design decision. For the past year nearly every TDA I have worked with has been in sales mode and unable to conceive of the implications of implementation. Simple things, like understanding that data centre migration is not the same as IT modernisation; they may have overlapping or parallel paths, but moving a workload from location A to B does not provide any benefit unless one also modernises the platform and the manner in which it is operated.
I am not sure these behaviours can be changed but I am prepared to give it some thought…
The Six Ingredients in the “Secret Sauce” of Automation Logic.
Since we founded Automation Logic back in 2010, the growth in our business and our success to date have been founded on repeat business and referrals within some of the largest organisations in the public and private sector in Europe.
Sure, from time to time we’ve exhibited at a show, spoken at a conference, maybe even placed an advert or two, but all of these things have made little impact compared with the enduring relationships that we have established in our customer and partner base.
While we continue to grow at a fast pace, the size of our organisation now and our plans for the future inevitably mean that, as a business, relying purely on “organic” growth is not a sensible strategy, and so we’ve recently been scaling out the commercial side of our operations, appointing Tony Weston to head Business Development and working with a consultant to further develop our go-to-market strategy.
It’s been illuminating and challenging to work with senior and seasoned executives who are new to our business, and who demand that we explain what it is that makes our business so special and unique – the ingredients of our “secret sauce”, if you will.
After much coffee, white boarding and sticky notes we think we’ve captured it, and so I’d like to share it with you…
The Six Key Ingredients in our Secret Sauce
- Brilliant People
We’re fundamentally a consulting firm. It would be easy to say that we hire brilliant engineers and that’s the secret, but that’s really only part of it. Sure, our guys are super smart, but they’re also not your average “consulting tech. head”. We recruit the brilliant generalists, the ones that can see and influence the bigger picture, who can bring broad experiences to bear to solve knotty problems that might not be just about technology, but could be about people, process and/or technology. Our guys love those challenges and they stay with us because the projects on which they work give them those opportunities over and over again. We’ve built, and continue to grow, a truly unique team.
- Unrivalled Experience
We’ve worked on some of the biggest and most complicated IT transformation projects that Europe (or indeed the world) has seen – more than 60 to date and counting. In-house IT folks might get to experience the kinds of projects we work on less than a handful of times in their career, yet for our guys it’s ALL they do. You can’t measure the value that can bring in making these kinds of projects a success. We recognised the value of this early on and have built processes for systematically capturing and sharing that knowledge, not only across our own team, but also with our clients.
- Unique Methodology
Big IT transformation projects aren’t just about strategy, people, process or technology; they’re about a combination of all of these things. They are also about understanding and factoring in the business impact various decisions can make. Getting maximum ROI can be achieved only by considering all of these factors together. Lastly, the ongoing success of a project can only be assured if, as a part of the process, knowledge transfer and training occur to leave the internal team ready to take back the reins. Having developed our own approach and methodology to address all of these factors, we believe that we’re in a fantastic place to assure each project’s long-term success.
- Cutting Edge Use of Technologies and Processes
There are a host of relatively new tools and processes at our disposal. Being vendor neutral and having extensive experience with many of them means that we are able to pick and choose the best combination to get the job done. When it comes to processes, like say Agile, we’re also not just following a manual. With so much first-hand experience we know when it makes sense to adapt, to “cheat”, and to learn and share knowledge from past experience. Sometimes it also leads us to develop our own IP to systematically tackle issues that we see occur repeatedly, either for us or our clients. One example is the HP CSA test library that we recently made available to the broader community via an open source licence.
- Partner Ecosystem
We’ve built some fantastic relationships over the last five years with some of the industry’s leading technology vendors. As well as being a trusted delivery partner for their particular solutions, we recognise that unlocking greater business benefit for customers is often bigger than any one product. They value our vendor independence and ability to bring multiple solutions together (often with a bit of our own IP and experience in the mix for good measure) to create an even bigger “win” for their clients, and in doing so create a win-win-win scenario for all involved.
- Our Clients
Last, but by no means least, are our clients. They understand that we’re working on often ground-breaking projects, capable of delivering huge efficiencies and/or top-line growth to their organisation. Sometimes they need to make a leap of faith and trust in our judgement, and we can jointly celebrate when that faith is rewarded.
So that’s it. The ingredients list in our secret sauce. The recipe? Sorry, there are some things we will be keeping to ourselves…
The Six Best Automation Questions from HPE Discover
I’ve just returned from HPE’s customer and partner event, HPE Discover London, where I took part in a panel session about Automation and the Cloud.
It’s always enjoyable to meet old friends, discover new ones, and share experiences of travelling the path to improve the delivery of IT services and applications; something which for Automation Logic is at the core of what we do.
There were six great questions that came up in our session which, for me, summarise some of the crucial topics that you’ll need to address if you are embarking on an automation and cloud journey. We were at HPE Discover, so the HPE customers there were largely based on HPE OO and HPE CSA platforms, but these questions are universal and can be applied whatever technology foundation you are building upon. Here’s my summary.
1. What are the typical things we should automate?
The driver behind automation is usually to reduce time-to-market (do things faster) and with greater quality. This reduces rework and ultimately reduces cost.
If you started on this path a few years ago it was all about automating simple infrastructure provisioning (a server or VM). These days it has progressed into automating IT4IT value chains, and enabling services such as software delivery platforms.
2. How do I not automate for the sake of automating? How do you stop automating bad processes?
Not everything needs to be, nor should be, automated. Sometimes the cost of automating does not justify the reward. Other times the existing, often manual, process is flawed (because that’s just the way it’s always been done).
I have two comments on this:
- Benefits analysis: A benefits analysis methodology is needed that baselines existing performance and estimates future improvements for a price. This is a topic for several other blog posts.
- Leadership style: This concept is broader than just IT. All too often I see people following the same process (because that’s how it’s always done) as a way of saying they’ve “checked all the boxes, so I’ve done my job”. Meanwhile the reason, or goal, for doing the process is unknown. In most situations, this is bad. Leadership is needed to transform the organisation from executing to avoid errors, to decision making for achieving excellence.
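To make the benefits-analysis point concrete, here is a deliberately simple payback calculation that baselines the manual process against an automated one. Every figure is a made-up assumption, and a real methodology would cover far more (quality, risk, opportunity cost):

```python
# Illustrative benefits-analysis arithmetic: how long does the automation
# investment take to pay back? All numbers below are invented examples.

def payback_months(runs_per_month, manual_cost_per_run,
                   automated_cost_per_run, build_cost):
    """Months until the automation build cost is recovered by savings."""
    monthly_saving = runs_per_month * (manual_cost_per_run - automated_cost_per_run)
    if monthly_saving <= 0:
        return None  # never pays back: a candidate *not* to automate
    return build_cost / monthly_saving

# e.g. 40 deployments a month, 200 manual vs 20 automated per run,
# 30,000 to build the automation
months = payback_months(40, 200, 20, 30_000)
```

A `None` result is the quantitative form of “the cost of automating does not justify the reward” above: the model itself tells you when to leave a process manual.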
To learn more, read this great book: Turn the Ship Around! by David Marquet.
3. How do I decide what should be kept in my data center and what should be put into the public cloud?
There may be regulatory compliance reasons why some data cannot leave your data center, or if the data does then it must stay within a certain jurisdiction. To best address these risks and issues, information security specialists need to be engaged in the beginning and throughout the process of going to the public cloud.
Another measure of risk is how quickly you can respond to an incident. To better understand this, ask the question: if I need to perform a given change to my system to mitigate risk, which service allows me to do that the quickest – my internal IT and data center, or a public cloud provider?
4. What’s the best way to set up an automation team?
Historically many organisations had informal pockets of automation distributed around their business. When automation became an area for investment, many organisations then created a centralised automation team. In the beginning this is very helpful. It allows you to better understand the issues and opportunities. However, this model does not scale. Very quickly your central automation team will be known as yet another IT group that says NO.
Instead, one of the early goals of a central automation team must be to create a governance model that enables others to build their own automation through coaching, promoting standards, facilitating reuse, and always practising continuous improvement.
5. What makes for a successful automation project?
Regardless of the technology, you must have user involvement and executive sponsorship, otherwise your project will likely fail. From a technical perspective, automation is usually an exercise in integration. Therefore, all tools, applications, and services should have an API that is functional, performant, robust, and secure.
6. In your next automation and cloud project, if you could do something different, what would it be?
Inspire a greater appreciation for the commercial budget and KPIs.
Frequently organisations invest in an IT project as, for example, a short-term, nine-month deliverable. Important aspects such as enhancements, dependent systems, support and retirement are easily forgotten.
One alternative that helps you appreciate a more realistic budget, and supports KPIs to make decisions around continued investment, is to run it as an agile project: for example, creating product- or service-oriented cross-functional teams, starting with a minimum viable product (MVP), and using KPIs to determine whether the service should receive additional funding or be retired.
Some useful information:
Over the years our consultants have amassed huge experience of what makes an automation project successful, based on many different technology platforms – HPE included. Hopefully from my answers here you’ll understand that very often (maybe even always?) the critical success factor for any project lies in the non-technical people and business aspects of the project. Rarely is it enough to just hire a few great engineers – we learned that lesson early on at Automation Logic and hopefully it is something that every one of our clients learns from us in turn.
"Technology of Automation" London DevOps Meet Ups 2016 Announced
Today we are pleased to announce that our “Technology of Automation” blog is about to get real! We are in the process of organising a series of initially London-based, face-to-face meet-ups for the DevOps, Automation and Infracoder developer community, where you will be able to hear from our bloggers about […]
Video: Extending the Salt server management framework
I gave a talk on extending Salt at The London Python Meetup in April. Link to video below: http://skillsmatter.com/podcast/java-jee/my-experience-of-using-server-management-framework-salt The talk seems to have been categorised under Java but I won’t hold that against the Skillsmatter folks who are otherwise excellent.
Want to work with us?
If you'd like to find out more about joining our growing team of engineers, consultants, strategists and evangelists for automation, please get in touch with a member of our team.