Avoid the Myths of Hybrid-Cloud and Go Multi-Cloud! Post 1 of 3 – Public/Private Cloud By AL Co-Founder, Kris Saxton
Automation Logic were delighted to present with our client, David Rogers from the Ministry of Justice, this week at the Central Government Business and Technology event (CGBT). CGBT is the UK’s leading event dedicated to sharing best practice, emerging trends and innovations across the Civil Service.
MOJ have been an early adopter of public cloud technology within government and have partnered with us at Automation Logic to build and operate the cloud platform on which they run their new digital services. To learn more about our work with the MOJ, check out our case study.
Our CGBT presentation, “Hybrid Clouds: How to go slow and haemorrhage money doing it”, centred on dispelling the false promises associated with hybrid cloud. In the first of a three-part blog series, we’ll explore these themes in more detail. We’ll introduce the two main types of hybrid cloud and where the interest in hybrid cloud stems from. We’ll then expose the reality of hybrid cloud as a combination of mis-marketing and over-engineering, and show how a much simpler strategy (which we call ‘multi-cloud’) delivers all the benefits that hybrid cloud promises but fails to provide, and does so faster. We’ll conclude by describing how a collaborative, multi-cloud strategy is working really well for Automation Logic and the MOJ.
Our first type of hybrid cloud, which we call ‘Private/Public’, can be defined as a mix of private (i.e. on-premises) and public cloud hosted resources, combined and consumed as a single, unified service.
Many organisations opt for this type of hybrid cloud believing it will enable them to support different workloads depending on factors such as data sensitivity, data sovereignty, compute architecture and service architecture.
Our experience, having successfully delivered cloud engagements to clients across Central Government, Banking, Retail and beyond, tells us that the arguments for hybrid cloud just don’t stack up.
For Private/Public hybrid clouds, where the Private side is based on existing infrastructure*, it is almost never worthy of the term cloud. That is not to say these on-premises systems are not genuinely useful (they are), but they are typically only virtual machine provisioning platforms, albeit with some advanced automation. They are almost always missing several of the key characteristics that would warrant the term ‘cloud’, e.g. usage-based billing, massive elasticity, limitless scalability or direct API access. In practical terms, you can’t spin up 1,000 machines in 5 minutes, destroy them and only pay for what you’ve used. Whilst it may be unfair to hold these systems up to these kinds of standards (they were never designed for that), it’s also disingenuous to call them clouds.
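To make the usage-based billing point concrete, here is a back-of-envelope sketch in Python; the hourly rate is an assumption for illustration, not any provider’s actual pricing:

```python
# Back-of-envelope illustration of usage-based billing. The hourly rate
# below is an assumption for illustration, not a real provider price.
HOURLY_RATE = 0.10  # assumed $/hour for one small VM

def burst_cost(machines: int, minutes: float, rate: float = HOURLY_RATE) -> float:
    """Cost of running `machines` VMs for `minutes`, billed by usage."""
    return machines * (minutes / 60.0) * rate

# 1,000 machines for 5 minutes: you pay for ~83 machine-hours, not 1,000.
cost = burst_cost(1000, 5)
print(round(cost, 2))  # 8.33
```

On a usage-billed public cloud that burst costs a few dollars; on a fixed on-premises estate you would have to own the peak capacity whether you use it or not.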
So what you really have with a Private/Public hybrid cloud is public cloud attached to something that you’re calling private cloud, but which is not really a cloud at all.
Why is this important, after all, what’s in a name?
Apart from the disappointment of buying into a set of expectations and finding yourself short-changed, adopting hybrid cloud as a strategy sets a precedent within your organisation that there is parity of capability between the Private and Public elements when, as we have just discussed, that is rarely the case (particularly if we’re talking about private infrastructure that is more than a few years old).
Framed in this way, hybrid cloud is a mix of ageing (not very cloudy at all) infrastructure and public cloud. That’s not a strategy; that’s a half-finished transformation.
Hybrid Cloud is not a strategy, it’s a predicament.
An argument for hybrid cloud is an argument against greater public cloud adoption and a drag on your transformation to modern hosting and digital services.
In the next part of this blog series, we’ll examine the other type of hybrid cloud that we frequently encounter: the broker or abstraction layer. We’ll also introduce the possibility of a simpler way to attain the benefits that hybrid cloud fails to deliver, an approach we call multi-cloud, and how this is working well at the MOJ.
Today’s blog was written by Automation Logic Co-Founder Kris Saxton.
To discover more about Multi-Cloud, read on or get in touch.
Contact us today – Info@AutomationLogic.com
*Although there are still some edge cases for private clouds, anyone (is there anyone?) seriously considering building a private cloud in 2017 must have *very* strong business reasons for doing it: reasons that would trump the higher costs, lower operational resilience and weaker security that come with running your own infrastructure without a multi-billion-dollar company at your back.
Automation Logic’s purpose is to deliver an automated world where people are free to realise their creative potential. We deliver technology-enabled transformation, helping our customers adopt emerging business practices and IT. Our portfolio of services span consulting, implementation and operational management solutions for DevOps, Cloud and Automation.
Strength in Diversity
Today’s triggering of Article 50 is a significant milestone in the UK’s journey to exiting the European Union. Brexit poses complex challenges for businesses including the impact to the workforce and this got us thinking about the value of diversity in the workplace.
Here at Automation Logic, our people are what sets us apart and our inclusive culture will remain!
We are proud of our growing and internationally diverse workforce, and it plays to our values that our business is made up of top talent drawn from across the globe.
24.3% of our workforce are non-UK EU nationals
13.5% are non-EEA nationals
Our culture is underpinned by employees working collaboratively and using, elevating and growing their skills and experience to deliver results for our customers.
We believe in nurturing talent and creating a work environment where employees can thrive and we continue to attract the best and the brightest to join our business.
The Six Ingredients in the “Secret Sauce” of Automation Logic.
Since we founded Automation Logic back in 2010, the growth in our business and our success to date have been built on repeat business and referrals within some of the largest organisations in the public and private sector in Europe.
Sure, from time to time we’ve exhibited at a show, spoken at a conference, maybe even placed an advert or two, but all of these things have made little impact compared with the enduring relationships that we have established across our customer and partner base.
While we continue to grow at a fast pace, the size of our organisation now, and our plans for the future, inevitably mean that relying purely on “organic” growth is not a sensible strategy. So we’ve recently been scaling out the commercial side of our operations, appointing Tony Weston to head Business Development and working with a consultant to further develop our go-to-market strategy.
It’s been illuminating and challenging to work with senior and seasoned executives who are new to our business, who demand that we explain what it is that makes our business so special and unique – the ingredients of our “secret sauce”, if you will.
After much coffee, white boarding and sticky notes we think we’ve captured it, and so I’d like to share it with you…
The Six Key Ingredients in our Secret Sauce
- Brilliant Generalists
We’re fundamentally a consulting firm. It would be easy to say that we hire brilliant engineers and that’s the secret, but that’s really only part of it. Sure, our guys are super smart, but they’re also not your average “consulting tech head”. We recruit brilliant generalists: the ones who can see and influence the bigger picture, who can bring broad experience to bear to solve knotty problems that might not be just about technology, but could be about people, process and/or technology. Our guys love those challenges, and they stay with us because the projects they work on give them those opportunities over and over again. We’ve built, and continue to grow, a truly unique team.
- Experience and Knowledge Sharing
We’ve worked on some of the biggest and most complicated IT transformation projects that Europe (or indeed the world) has seen: more than 60 to date and counting. In-house IT folks might experience the kinds of projects we work on less than a handful of times in their careers, yet for our guys it’s ALL they do. You can’t overstate the value that brings in making these kinds of projects a success. We recognised this early on and have built processes for systematically capturing and sharing that knowledge, not only across our own team but also with our clients.
- Unique Methodology
Big IT transformation projects aren’t just about strategy, people, process or technology; they’re about a combination of all of these things. They are also about understanding and factoring in the business impact that various decisions can make. Maximum ROI can be achieved only by considering all of these factors together. Lastly, the ongoing success of a project can only be assured if, as part of the process, knowledge transfer and training occur to leave the internal team ready to take back the reins. Having developed our own approach and methodology to address all of these factors, we believe we’re in a fantastic place to assure each project’s long-term success.
- Cutting Edge Use of Technologies and Processes
There are a host of relatively new tools and processes at our disposal. Being vendor-neutral and having extensive experience with many of them means that we are able to pick and choose the best combination to get the job done. When it comes to processes, like, say, Agile, we’re also not just following a manual. With so much first-hand experience we know when it makes sense to adapt, to “cheat”, and to learn and share knowledge from past experience. Sometimes it also leads us to develop our own IP to systematically tackle issues that we see occur repeatedly, either for us or our clients. One example is the HP CSA test library that we recently made available to the broader community via an open-source licence.
- Partner Ecosystem
We’ve built some fantastic relationships over the last five years with some of the industry’s leading technology vendors. As well as being a trusted delivery partner for their particular solutions, we recognise that unlocking greater business benefit for customers is often bigger than any one product. They value our vendor independence and our ability to bring multiple solutions together (often with a bit of our own IP and experience in the mix for good measure) to create an even bigger “win” for their clients, and in doing so create a win-win-win scenario for all involved.
- Our Clients
Last, but by no means least, are our clients. They understand that we’re working on often ground-breaking projects, capable of delivering huge efficiencies and/or top-line growth to their organisation. Sometimes they need to make a leap of faith and trust in our judgement, and we can jointly celebrate when that faith is rewarded.
So that’s it. The ingredients list in our secret sauce. The recipe? Sorry, there are some things we will be keeping to ourselves…
We're not contracted to do that…
Not so long ago we had systems integrators and a few of them truly took on the overall ownership of a project and everything that goes with it.
Those were golden days for some but things change.
More often than not we hear of outsourcers or internal teams spending as much time negotiating who is responsible for what, after the contract has been signed, as they spend actually delivering.
I am delighted to find myself at the centre of a movement called DevOps that I think has the potential to eradicate this problem.
Organisations that fail to adopt the DevOps model will eventually lose to competitors who have changed and to new market entrants.
DevOps enables an organisation to do three things that have the potential to change the ‘not contracted’ road-block.
- By adopting Agile, projects become smaller and the deliverable has fewer unknowns (it evolves based on business-owner feedback). The scope is smaller and more manageable. From a technical standpoint there are fewer grey areas; the sprint/scrum, or however you organise yourselves, owns the deliverable, full stop.
- Each project now must be composed of the ‘right’ people to do the job. A good team will have all of the necessary skills represented, talking and self-organising at every stand-up. Hand-offs, and the latency they introduce, are greatly reduced.
- Everyone has a relationship; a product is not delivered until the sprint is complete. In the unlikely event that points 1 and 2 above fail, at least the leaders in the organisation will only have to manage and unravel a few weeks of elapsed work, so recovery becomes a reality.
To be brutally honest, I have seen one DevOps organisation fall into the “not contracted to do that” trap but my analysis of their situation is that they were DevOps in name but not in culture.
It was all about an operations team trying to be topical but without actually changing the nature of the IT business…
Lance Armstrong on DevOps
What lesson does Lance Armstrong teach the DevOps Community? Cheat to win…
In cloud service provisioning and management, we’ve all been shown a shiny vision of the future by the likes of Netflix, Twitter and Facebook, where services are provisioned in an instant then continuously tested, upgraded and healed. Sometimes it’s like watching the unstoppable US Postal team (the “Blue Train”) of the early 2000s. We strive to emulate them, and struggle to keep up. How do they do it? Well, it turns out they cheat.
Cheat is perhaps a bit strong, but they certainly side-step a lot of problems that the rest of us have to deal with, and by ‘us’ I mean those who have been running IT operations in enterprises older than 10 years. Yes, I’m talking legacy here, legacy and other millstones like regulation, auditing and compliance. Try winning the Tour on a Raleigh Chopper when you are stopped every 10k so someone can check that your bike is still legal. EPO, anyone?
Well, with the exception of coffee, most of the drugs I enjoy are performance-hindering, so if blood doping is out of the question, what can you do? Fortunately all is not lost, and there are several actions you can take to stay with the peloton.
1. Tools: Change your bike
Sorry Lance, but it IS about the bike, at least partially.
You can’t implement this new, faster, more agile style of delivery with antiquated tooling. I often stress that there is too much focus on tools at the expense of process and culture, and whilst I still believe that, tooling can’t be ignored completely. Tools change what is possible in terms of process and can reinforce both good and bad culture, so if your tools aren’t empowering you to effect the change you want, swap them out. Note I’m talking about your service provisioning and management tools here, not the underlying technology of the service themselves (dealing with that kind of legacy is something for another post).
2. Getting Started: Pick your race.
Your first race shouldn’t be Le Tour; you’ll blow up on the Alpe d’Huez. Leverage a key DevOps principle, smaller and more frequent releases, to gradually build your team’s capability. Even within an agile methodology, keep your first few sprints short and not too ambitious; after all, you’ll be getting used to your new bike. Set a sprint goal for something that’s not too technically challenging, yet delivers clear value and is measurable and demonstrable back to the business. Building your team’s confidence with early successes is crucial, and you’ll soon find your cadence naturally increases.
3. Data Driven: Adopt a scientific method.
Gone are the days of cigarettes and cognac for lunch: every cycle team now measures everything going on with both bike and rider with a view to making the team go faster, and so should you. It’s often a bit of a chore to begin with, but collecting data on your agile delivery process is an essential discipline that will quickly deliver rewards. You’ll be able to spot blockages and measure the effects of trying something different, all of which will help increase your delivery speed. If it isn’t measured, it doesn’t get better.
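As a sketch of what collecting data on your delivery process can look like, here is a minimal cycle-time calculation over an invented sprint log (the field names and dates are hypothetical):

```python
from datetime import date

# Hypothetical sprint log: each item records when work started and when it
# reached production. Field names and dates are illustrative only.
items = [
    {"started": date(2015, 3, 2), "shipped": date(2015, 3, 6)},
    {"started": date(2015, 3, 3), "shipped": date(2015, 3, 12)},
    {"started": date(2015, 3, 9), "shipped": date(2015, 3, 11)},
]

def cycle_times(items):
    """Days from start to production for each work item."""
    return [(i["shipped"] - i["started"]).days for i in items]

def average_cycle_time(items):
    times = cycle_times(items)
    return sum(times) / len(times)

print(cycle_times(items))         # [4, 9, 2]
print(average_cycle_time(items))  # 5.0
```

Even this crude measure makes blockages visible: the nine-day outlier is the item worth asking questions about at the next retrospective.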
4. Accelerate: Get a coach.
The best athletes get there by learning from and working with the best coaches. Those that have been there and done it, often many times over, and whose only goal now is to pass on their hard earned knowledge from their years on the professional circuit.
Here at Automation Logic we have unparalleled experience in applying this emerging field within very large enterprises. Even if you know where you want to go, we can get you there faster and straighter. We know the racing line, and we know where the potholes are! We’ll help you build a team, train them and keep your tools at optimum performance. Perhaps most importantly, we’ll help you adapt agile principles so they work for you and your specific situation (legacy, regulation etc.). Often the most important decision you’ll make is where to allow exceptions to agile principles so they work in large organisations.
Good luck, cheat to win!
Observations from a Year of 'Enterprise'…
…Is it possible to change behaviour from ‘beat your colleague’ to ‘beat your competition’?
I have spent many years working in startups and have become accustomed to teams of people spending their energy working together to create things and solve problems. But I have spent most of the last year helping enterprises adapt to the changing needs of the IT market, most specifically with DevOps and Continuous Delivery. Perhaps I am a little slow in reaching this conclusion, but the most significant difference between the two environments dawned on me only recently.
In pretty much every enterprise I have visited over the past year, ALL I have witnessed is ‘beat your colleague’ behaviour.
Is it possible to change behaviour from ‘beat your colleague’ to ‘beat your competition’?
Let me provide some examples that we can work with.
– We’re not contracted to do that!
This was the direct response we received when working with a professional services delivery team on behalf of a major IT vendor. The vendor had sold a transformation project and subsequently carved up the work between its many divisions. During the implementation phase each division battled for revenue and tried to limit the work it had to perform. Let battle commence. I guess there is no need to explore what the client was experiencing!
– Them Vs. Us
A forward-thinking vendor proposed a transformation programme to a very large financial services customer that would modernise application delivery and operations. The proposal was fully costed, and the vendor was prepared to invest and back up its proposal by underwriting the financial risk. I have not spent much time working out whether the vendor could deliver; that could come later. The team responsible for managing the estate decided the best defence was to create its own business case that simply matched the vendor’s proposal.
No-one had a good breakdown of costs and no-one was prepared to collaborate to produce one: just a simple mathematical exercise that took little notice of the detail in the proposal.
– Down to Earth With a Bump
An organisation that I had come to rate quite highly as a visionary brought me down to earth with a bump. I had worked with this organisation some years ago to help develop their ideas for a very early cloud. The person leading it impressed me and certainly talked a good story. However, as I re-engaged I found that the business had decided not to consume the service he pioneered and had removed itself from using any subsequent offerings. I have my own theory on some of the issues here, but the bottom line is that the technical team and the business are at odds; is this an issue of agenda or of capability?
Hearing the arguments between proponents at board level is quite an eye-opener.
– The Technical Design Authority who makes promises of delivery without any idea if, or how, they can be implemented
For much of my career I have been the Technical Design Authority. This is a role that has caused me many nights of lost sleep and considerable worry, but it is also balanced by some great experiences and lots of learning. No-one knows everything, and there is great power in accepting one’s weaknesses, but to execute this role one has to be diligent and explore the implications of a design decision. For the past year nearly every TDA I have worked with has been in sales mode, unable to conceive of the implications of implementation. Simple things, like understanding that data centre migration is not the same as IT modernisation: they may have overlapping or parallel paths, but moving a workload from location A to B does not provide any benefit unless one also modernises the platform and the manner in which it operates.
I am not sure these behaviours can be changed but I am prepared to give it some thought…
The Six Perfect Patterns to succeed at DevOps
So what are the six patterns you need to succeed at DevOps?
1. A Real Executive Sponsor
Proper sponsorship and ongoing support are key; this is the single most important factor in your success. Without an actively engaged senior leader you will fail. Your sponsor should have cross-departmental responsibility, i.e. they own the business service and the IT functions that it uses. They are part of the team and should be engaged continuously, both in the decision-making process and in measuring success.
2. A Culture of Continuous Improvement
We’ve had this term ringing in our ears for years. Continuous Improvement is at the heart of ITIL and Six Sigma. Yet ITIL and Six Sigma are constrained; we need something holistic and measurable. With DevOps and the body of experience we have acquired, we now have the tools to measure the effectiveness of your project, the management regime it uses, the quality of the software and the efficiency of operations. Not only can we measure these, but we can also work out the right triggers to change things now.
DevOps is practical – not an abstract theory and it is core to your Continuous Improvement process.
3. The Scientific Method
Developers and administrators have to adopt the principles that make manufacturing work. When building systems we start with a requirement, develop a story, build something, test it, analyse the results, then change variables to see what impact they have on the results. In a DevOps world it is essential to capture and analyse data. The purpose of automating and adopting build systems is to create a repeatable process. With comprehensive automation and sensitive instrumentation it becomes possible to make a change to test a hypothesis and measure its impact. Measure everything. Test everything. Make careful alterations and measure the impact.
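A minimal illustration of "change one variable, measure the impact": comparing mean deployment duration before and after a hypothetical pipeline change (all numbers are invented for illustration):

```python
# Deployment durations in minutes before and after a hypothetical pipeline
# change. These figures are invented to illustrate the method, not real data.
before = [22, 25, 21, 27, 24]
after = [14, 16, 13, 17, 15]

def mean(xs):
    return sum(xs) / len(xs)

def improvement(before, after):
    """Relative reduction in mean duration after the change."""
    return (mean(before) - mean(after)) / mean(before)

print(round(improvement(before, after), 2))  # 0.37
```

The discipline is in the loop, not the arithmetic: state the hypothesis, change one variable, collect the measurements, and only then decide whether to keep the change.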
4. A Drive to Standardisation
I have long argued that automation in the cloud is easy, in a virtual world it is not too hard, and with physical it depends… The drive to standardise is well advanced and should be within reach of all organisations. The balance is working out what to standardise, how to make standards extensible and where to allow some level of creative licence. I truly hope we have reached the point where the argument to standardise is won.
5. Investment in…
How often do we hear that our biggest investment is our people? Train them, give them responsibility, form high-performance teams. Encouraging your top performers to engage in community activities, share their thoughts and contribute to non-IT projects will help them develop better skills and add value to the business. Avoid the superhero culture at all costs. Find the ‘rose-tinted sceptic’ and test their mettle. When they say ‘we can do this better’, test what that means and how they would tackle the problem; qualify this further by understanding how they would measure their changes and how they would take the team with them. Then give them rope!
6. Supplier & Vendor Management
Outsourcing and off-shoring have been of short-term benefit and they are not going away quickly, yet I rarely visit an organisation where there is not an acknowledged cost. If you are in a position where you rely on outsourcing or off-shoring align yourself and your supplier more effectively. Manage your suppliers and define the ‘template’ that they must deliver to. Tools, processes, quality and acceptance are all things that you own and should enforce on your supplier. A good partner will embrace a relationship where the client explains precisely what is needed and develops a ‘technical’ contract that helps them to deliver more quickly.
It’s a win win…
Seven Key Criteria on Which to Evaluate a PaaS Provider, and Two Red Herrings to Avoid…
In the first of two posts on the topic, Kris Saxton highlights the key criteria he sees in evaluating the plethora of new PaaS solutions that are coming to market, as well as offering a couple of “red herrings” to be careful to avoid…
There was a time just after the iPod was launched when you could gauge the level of general insanity on the Internet by the rate at which new iPod docking stations appeared, with a new one launched seemingly every week. PaaS projects are starting to feel like a similarly good yardstick for the hype around cloud. But how do you select the right platform for your needs?
PaaS Fit for Large Enterprise?
As with all of these nebulous computing trends (cloud, DevOps), I have to spend a bit of time defining what PaaS means to us at Automation Logic. As a consultancy focused on the large enterprise, our customers are typically blessed with at least one of the following: legacy, regulation, change control, complex internal processes, a terrible canteen. So the PaaS variant that I have in mind is the sort where you are provisioning more complex services on top of your existing IaaS, rather than the black-box development ecosystem where all you need to contribute is application code. In short: more Cloud Foundry than Heroku. A platform here means a complex, multi-node service or environment required to host an application, rather than just a hosted language runtime.
So, if you’re working for an Internet startup with a single product based on a cloud you didn’t build, lucky you, you can get back to your flat white. For the rest of us, read on.
Still here? Good. With this type of PaaS in mind, there’s no doubt that the potential to go beyond the infrastructure and manage entire services and environments has substantial value, but in such a fast moving, immature sector how do you ensure you aren’t backing a lemon?
Having implemented a few of these now, we’ve set out in this article what we think is a sensible set of criteria for large enterprises to adopt when evaluating a PaaS. It also challenges a few commonly pushed criteria which we think have little real merit (at least right now). In a follow-up post, we’ll overlay these requirements onto the current PaaS frontrunners and see how they measure up.
1. High Level Objects
In order to be able to provision and manage platforms, a PaaS needs to be able to describe and manage objects beyond simple compute, storage and networking – it has to look beyond the infrastructure. What these objects represent can be quite varied, they can be arrangements of servers (server groups, tiers) or provisioning activities (change control, monitoring enrolment); essentially they represent all the things you need to sort out to produce a fully-fledged service, environment or platform. Whilst the objects themselves are little more than named nodes, they act as pegs on which to hang more interesting stuff such as relationships, ordering, data and workflow (all described next). Consequently, a rich taxonomy for these higher level objects is a good sign of a flexible PaaS, the converse being something that is likely platform-specific (e.g. good at deploying Java onto AWS but not much else).
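As a rough sketch of what a richer object taxonomy might look like (the node types and names below are illustrative, not any vendor’s schema):

```python
from dataclasses import dataclass, field

# A minimal PaaS object model that goes beyond plain servers: nodes can
# represent tiers, server groups and even provisioning activities such as
# change records. All types and names here are hypothetical.
@dataclass
class Node:
    name: str
    type: str  # e.g. "server_group", "tier", "change_record"
    children: list = field(default_factory=list)

platform = Node("web-platform", "platform", children=[
    Node("app-tier", "tier", children=[Node("app-servers", "server_group")]),
    Node("db-tier", "tier", children=[Node("db-servers", "server_group")]),
    Node("change-123", "change_record"),  # a provisioning activity, not a server
])

def types_in(node):
    """All object types in the model: a rough gauge of taxonomy richness."""
    found = {node.type}
    for child in node.children:
        found |= types_in(child)
    return found

print(sorted(types_in(platform)))
# ['change_record', 'platform', 'server_group', 'tier']
```

The nodes themselves do little; their value is as pegs for the relationships, ordering, data and workflow discussed in the following criteria.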
2. Dependencies and Ordering
Even if you have some magical app where stateless nodes can join and leave as they please, chances are (in enterprise land) there will be elements either side of the main app which must also be automated as part of the platform provisioning process, and which have to happen in a certain order. Whether it’s declarative dependencies (nice) or procedural ordering (good enough), your PaaS needs to be able to support this capability.
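A minimal sketch of how declarative dependencies can drive provisioning order; the component names are hypothetical:

```python
# Each component declares what it needs; a topological sort then yields a
# valid provisioning order. Component names are illustrative only.
deps = {
    "network": [],
    "database": ["network"],
    "app": ["network", "database"],
    "load_balancer": ["app"],
}

def provisioning_order(deps):
    """Return components so every dependency precedes its dependents."""
    order, done = [], set()

    def visit(name, seen=()):
        if name in done:
            return
        if name in seen:
            raise ValueError(f"circular dependency at {name}")
        for dep in deps[name]:
            visit(dep, seen + (name,))
        done.add(name)
        order.append(name)

    for name in deps:
        visit(name)
    return order

print(provisioning_order(deps))
# ['network', 'database', 'app', 'load_balancer']
```

This is the declarative style: the model states only the dependencies, and the engine derives the sequence (and catches cycles), rather than a human hand-maintaining a procedural run list.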
3. Data Persistence and Discovery
Provisioning a service involves changing the environment in which you’re operating: IP addresses get consumed, things get named, new nodes come online. When you move beyond the single server and need to start provisioning platform components that relate to one another, you need a method of exchanging information between those components such that they can dynamically configure themselves. The simplest example is a two-tier platform with a database and a front-end component. The front-end needs to connect to the database but doesn’t know how until that database is provisioned (and given an IP address). In this scenario either the database component writes its IP address back to the persistence layer for the front-end to pick up later, or the front-end component is able to discover this information through a real-time network query. I’ll give practical examples of how various PaaS (and IaaS) offerings implement this in my next post.
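The database/front-end example above can be sketched as follows, with a plain dict standing in for the persistence layer (all names and addresses are invented):

```python
# A dict standing in for the PaaS persistence layer. In a real platform
# this would be a shared data store or discovery service.
store = {}

def provision_database(store):
    ip = "10.0.1.20"           # pretend the IaaS allocated this address
    store["database/ip"] = ip  # write back so later components can discover it
    return ip

def provision_frontend(store):
    db_ip = store["database/ip"]  # discover the database endpoint
    return f"connection_string=postgres://{db_ip}:5432/app"

provision_database(store)
print(provision_frontend(store))
# connection_string=postgres://10.0.1.20:5432/app
```

The alternative pattern, real-time discovery, replaces the dict lookup with a live network query against a registry, but the contract is the same: the front-end configures itself from information the platform produced during provisioning.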
4. Separation of Model and Workflow
Generally you want to separate (or loosely couple) the modelling aspects of the PaaS, which describe the platform components and their relationships, from the code which actually performs the service provisioning. It’s a natural break and allows you to develop and maintain all the provisioning code (the bulk of which will be integrations with 3rd-party services) independently. Tight coupling, to the point where you can’t even really tell workflow from the data model or (worse) from internal PaaS code, is a sign that you’re looking at a point solution and not something that’s going to survive the new services you will need to integrate over the life of the PaaS. Ideally, the workflow component will be something that can run independently of the PaaS; that way you’ll be able to reuse it in your IaaS and for other IT process automation tasks.
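One way to picture this loose coupling: the model is pure data, and provisioning code is looked up from a separate registry (a sketch with illustrative names):

```python
# The model is pure data describing components; it knows nothing about how
# provisioning is implemented. Names and types are hypothetical.
model = [
    {"name": "db01", "type": "database"},
    {"name": "web01", "type": "webserver"},
]

# Provisioning code lives separately and can be maintained (or replaced)
# without touching the model.
def provision_database(component):
    return f"provisioned database {component['name']}"

def provision_webserver(component):
    return f"provisioned webserver {component['name']}"

registry = {"database": provision_database, "webserver": provision_webserver}

def provision(model, registry):
    """Walk the model, dispatching each component to its provisioner."""
    return [registry[c["type"]](c) for c in model]

print(provision(model, registry))
# ['provisioned database db01', 'provisioned webserver web01']
```

Adding a new 3rd-party integration means registering a new function; the model, and everything already built on it, stays untouched.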
5. Lifecycle Management
A server is not always just for Christmas. PaaS projects can be guilty of wishful thinking: they assume they are dealing with stateless, transient servers that can come and go without any wider impact (wouldn’t life be easy if everything was just a web server dishing out static files?). The reality is that most servers are still provisioned with the assumption that they will be, if not long-lived, then at least around long enough that they need ongoing management. Patching, auditing, growing, shrinking: it all needs doing, otherwise you’ve just created an automated muck-spreader.
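The ongoing-management point can be sketched as an explicit lifecycle state machine; the states and transitions below are illustrative:

```python
# A server is not "done" once provisioned: it moves through ongoing
# management states. States and actions here are illustrative only.
TRANSITIONS = {
    "requested": {"provision": "running"},
    "running": {"patch": "running", "resize": "running", "retire": "retired"},
    "retired": {},
}

def apply(state, action):
    """Apply a lifecycle action, rejecting transitions the model forbids."""
    try:
        return TRANSITIONS[state][action]
    except KeyError:
        raise ValueError(f"cannot {action!r} a server in state {state!r}")

state = "requested"
for action in ["provision", "patch", "resize", "retire"]:
    state = apply(state, action)
print(state)  # retired
```

A PaaS that only models the "provision" arrow has automated the easy 10% and left patching, auditing and decommissioning as someone else’s problem.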
6. A Comprehensive API
Do I really even have to write this? If your PaaS doesn’t have an open, comprehensive, robust, documented, public, supported API for everything, don’t touch it. Throw it in the bin, then throw yourself in the bin for even considering it. What were you thinking?!
7. Rules Engines
Workflow can be further subdivided into business rules and provisioning logic and there’s an argument for keeping the two separate. Business rules answer questions such as: “if I’m in development, provision to AWS, if I’m production, provision to our private cloud”, whereas provisioning logic takes care of the actual implementation of these rule outcomes, usually with a focus on 3rd party integration. The differences in the developers, maintainers and governance of these two types of content usually warrants them being kept separate – you may even implement them in separate tools.
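A toy illustration of keeping business rules separate from provisioning logic, using the development-to-AWS, production-to-private-cloud rule from the text (the function and provider names are otherwise hypothetical):

```python
# Business rule: decides *where* a platform goes. Kept separate so it can
# be owned and governed independently of the provisioning code.
def placement_rule(environment):
    return "aws" if environment == "development" else "private_cloud"

# Provisioning logic: handles *how*, typically 3rd-party integration.
def provision_aws(name):
    return f"{name}: provisioned on AWS"

def provision_private(name):
    return f"{name}: provisioned on the private cloud"

provisioners = {"aws": provision_aws, "private_cloud": provision_private}

def deploy(name, environment):
    target = placement_rule(environment)  # rule outcome...
    return provisioners[target](name)     # ...handed to provisioning logic

print(deploy("billing-app", "development"))  # billing-app: provisioned on AWS
print(deploy("billing-app", "production"))   # billing-app: provisioned on the private cloud
```

Because the rule and the provisioners meet only at the dispatch step, they can be written by different teams, governed separately, and even live in separate tools, as the text suggests.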
And those two Red Herrings…
1. IaaS Agnostic
This is the classic kind of requirement I see coming from industry analysts (I mean, who wouldn’t want to avoid vendor lock-in?), but let’s consider what this would actually be like to implement today. With no common data formats or interface definitions for IaaS consumption, you’d have to reimplement your IaaS integrations over and over again for each IaaS you wanted to remain ‘agnostic’ from. To be truly IaaS-agnostic today means integrating with every IaaS provider and maintaining those integrations independently; obviously this is bonkers. Decide which IaaS offerings you want to use, pick a PaaS which supports them (or allows you to develop that support) and then just live with it.
When I first went to India, I went to my Doctor, worried about getting ill. My Doctor said: “don’t worry about getting ill in India; you’re going to get ill – so don’t worry about it.”
Similarly, don’t worry about vendor lock-in. You aren’t going to experience this so much as solution lock-in. Pick your PaaS with a two-year lifespan in mind, make sure it can deliver the business value you need in that kind of timeframe, and then expect to replace it. Today, your efforts are better spent ensuring your IaaS exposes nice, clean interfaces so that you can more easily replace your PaaS when the time comes (and it will).
2. Hybrid Cloud
Similar to the above, let’s be serious here. We (as an industry) don’t even have a common format for describing a compute node. So unless you’re willing to abstract down to the lowest common denominator across all potential cloud providers, you aren’t going to be dynamically moving workloads based on compute spot price like I see in so many pitiful marketing slides. At best your hybrid cloud will consist of a catalogue of items which can run on multiple cloud providers, but expect the underlying provisioning automation to be cloud-specific (i.e. you’ll have to maintain them all separately).
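That realistic best case can be sketched as a catalogue mapping each item to per-provider automation. The catalogue entries, provider names and playbook paths below are all hypothetical:

```python
# Hypothetical sketch of the realistic hybrid-cloud outcome: one catalogue
# entry per item, but a separate provisioning implementation per provider.
CATALOGUE = {
    "web-server": {
        "aws": "playbooks/aws/web-server.yml",          # maintained separately
        "private": "playbooks/private/web-server.yml",  # maintained separately
    },
}

def provisioning_playbook(item: str, provider: str) -> str:
    """Look up the cloud-specific automation for a catalogue item."""
    providers = CATALOGUE[item]
    if provider not in providers:
        raise KeyError(f"{item} has no provisioning automation for {provider}")
    return providers[provider]
```

The catalogue gives users the appearance of a single service, but adding a third provider means writing (and forever maintaining) a third playbook per item, not flipping a switch.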
That’s all for now, I’ll post the PaaS evaluations next.
Time for a Twix.
DevOps: Coming to an enterprise near you.
In an industry where hype is the norm, the DevOps movement has been fairly low-key until quite recently. Well, low-key in the enterprise perhaps, but there is a massive and very passionate community that has been doing great things for some time: deploying code to production, at scale, every 15 minutes, or creating supercomputer-scale systems for pharmaceutical research in the cloud, paid for with a personal credit card.
DevOps is now coming to an enterprise near you and will have a huge impact in 2015 and beyond, so get ready.
To understand the relevance of DevOps, compare it with the production line: DevOps is the IT equivalent.
The production line revolutionised industry and underpins our consumer world delivering a dazzling array of innovative products that are accessible to anyone.
While web and gaming companies pioneered this space, its merits have been identified by global software companies, retailers, banks and even heavy industry.
Over five years we have observed many engineers content to remain blissfully ignorant of DevOps. Others have dismissed it, but take note: this is changing, and changing fast; many enterprises have now nailed their sails to the DevOps mast.
DevOps is essential, so it is inevitable; it will become ubiquitous in the enterprise and is fundamental to redressing the impact that ITIL and out-sourcing have had on innovation and expertise.
How can we be so confident? DevOps shortens the software development lifecycle, reduces waste (time, process, repetition) and improves quality, enabling you to focus on what is important to the business and innovate. Most fundamentally, it is about automating all things, including process, infrastructure, deployment, test, build and change. This is something many organisations have tried to tackle alone, without success.
So ask yourself:
1. Do you have the culture, capability and confidence to commence your DevOps journey?
2. Do you know where to start and what to avoid?
3. Do you understand what DevOps best practices look like and can you avoid anti-patterns?
DevOps is a C-cubed world: Culture, Capability and Confidence. You need all three to succeed.
Don’t fall into the common trap of assuming this is simply about adopting the cloud or implementing tools – talk to someone who knows this space and has the hands-on experience to advise.
Want to work with us?
If you'd like to find out more about joining our growing team of engineers, consultants, strategists and evangelists for automation, please get in touch with a member of our team.