Your business needs to deliver faster. To keep up, development must introduce fewer changes per release, but at a much more frequent cadence. This creates a challenge for test teams: keeping pace with rapid change without compromising on quality. Automation is paramount to the success or failure of Continuous Delivery, and Continuous Testing enables early and frequent quality feedback throughout the CI/CD pipeline.
In this webinar, Eran & Ayal will explore how to implement Continuous Testing to ensure high-quality releases in a Continuous Delivery environment, including what to test and when to automate new functionality in order to optimize your efforts.
2. Demand for faster delivery of innovation calls for building, testing, and releasing quality software at the pace and frequency of business needs.
3. DevOps: back to the basics
• Always know what the customer wants
• Continuously deliver high-quality, secure applications
• Work together as a team
• Drive out waste in the system
• Keep assessing and improving the customer experience
4. DevOps: evangelized by the startups… but a challenging proposition for large enterprises.
5. [Slide: word cloud] Enterprise delivery is challenging: quality, compliance, manual processes, open source proliferation, security, cloud, monolithic architectures, cost pressures, deliver faster, software complexity, microservices, containers, long cycles, lack of insight, outsourcing, agile, testing, tools, governance, data, portfolio management, configuration, dev, operations, release, code, deployments, integrations, latency, user experience, lack of test automation, workflows, scaling, culture.
6. Our Micro Focus point of view
Predict and manage software complexity: an explosion in composite and service-oriented architecture adoption, and in software surface areas from web to mobile to things.
Thrive with Agile and DevOps transformation: the shrinking window of monetization drives business and IT to collaborate and release faster, which accelerates adoption of Agile & DevOps practices.
Optimize hybrid delivery: software development is shifting from primarily custom code creation to cloud service composition and consumption, and preference is shifting to cloud and utility models.
“By 2020, DevOps initiatives will cause 50% of enterprises to implement continuous testing using frameworks & open-source quality tools” (Gartner, Dec. FY’16)
15. Release 1, Release 2, Release 3: each release carries the same test layers (UI, API, Unit); automation starts with Unit in Release 1, adds API in Release 2, and adds UI in Release 3 (!)
16. Change in:
• Executable code
• Configuration
• Infra / environment
• Data
• Monitoring
Everything codified and version controlled. Automated tests (lots); manual tests (few); embedded security scans; automated deployments; autonomous operations; feedback loops.
Continuous Delivery pipeline tools: UFT Pro, StormRunner Load, Codar, ChatOps, AppPulse, SiteScope
17. Multi-billion dollar business unit in a Fortune 10 company
Before: build to QA in 2-4 weeks; automation for specific areas only; major release every 18-24 months; capacity: 8 products.
After: build to QA hourly/daily; fully automated CD pipeline; SaaS release every 4-8 weeks; quarterly on-prem release; capacity: 15+ products.
21. True DevOps process
Components: Dev, Git (repository), Jenkins (build orchestrator), Maven (build tool), Nexus (repository), Codar (deployment orchestrator), integrated test environment (HPE SW tools), Production server.
Flow:
1. Developer commits; the Git plugin listens, waits for changes, and signals that a code change is available.
2. Jenkins builds the war(s) with Maven and stores them in the Nexus repository.
3. Codar retrieves the last war(s) and posts the new war(s) to the staging environment.
4. Tests run against the new war (UFT via ALM, LeanFT, SRS, NV, SV); a pass/fail indication is returned and the last build is marked accordingly.
5. Nightly, the last successful build is retrieved and deployed to production.
6. Production data feeds back for more accurate tests (PAL/NV).
22. True DevOps process – Micro Focus tools
GIT/IDE, Jenkins, ALI DevBridge, LeanFT, ALM, ALM Octane, UFT, Mobile Center, BPT, StormRunner Load, StormRunner Functional, Service Virtualization, Network Virtualization, AppPulse Trace, AppPulse Mobile (Micro Focus SaaS / Cloud), spanning the staging environment and the production server. Production data is used for more accurate tests.
Today’s business landscape demands rapid delivery of innovation…
But it’s not enough to say “go faster”; doing so typically ends in disaster. To succeed, we need to go back to the basics and re-engineer how we deliver. This is the heart of the DevOps movement…
Something about unicorns…
When we talked to our customers about their concerns around delivering faster, they raised all sorts of valid reasons why DevOps could fail. But the most common concern was quality…
The need for a sustained, excellent user experience requires a new approach to scale.
Now we set up the discussion around our unique Micro Focus Point of View for App dev, test and delivery management
We believe in the following:
To thrive in rapid delivery of software innovation, we must balance and be able to deliver software with DevOps speed but also with quality
We need to rethink how we manage software complexity to adapt to new architectures and delivering software on new surface areas. A new approach to ALM will get us there
We need to plan for and optimize how we use hybrid delivery models to achieve new levels of scale and ensure a consistently adaptive and ideal user experience
Two other factors are core to our POV as well:
First, the world has moved to open source as a core enabler of Agile development. We believe in leveraging the flexibility of open source but adding enterprise scale to get true efficiencies in today’s multi-model IT environment
Secondly, it’s not enough to continue to innovate. We need to deliver the bridge from where customers are today to where they are going, enabling you, our customers to maximize your ROI of your existing investments while engineering for the future
So let’s start from the beginning. When it comes to application changes, everyone should know there are three factors that control your world: time, quality, and cost. The problem is, these don’t complement each other at all, and you can only choose two.
For example if you want to deliver faster at a reduced cost, this means you’ll have less focus on quality and low overall coverage in each release. You simply aren’t given enough time or resources, whether those are people, environments or automation to keep coverage high.
If you want to reduce cost and increase quality, that’s going to increase the amount of time it takes to deliver… by a lot. This is because you need more time to cover all aspects of your application changes to keep the risk of failure low.
And lastly, if you want to deliver a high-quality application and do so quickly, this tends to mean you throw more people, environments, or tools at the problem, which can be expensive. Why more environments, you ask? Well, typically changes don’t happen in a serial fashion, and by the time you are wrapping up testing for a particular release, your developers have already started working on the next one. To avoid downtime, each release in the pipeline needs its own environment: the current release, the testing version, and the developing version at a minimum, and none of these exists in a vacuum. There are integrations to consider, which drive the cost, and often the time, up even further.
So what can we do?
Well, the core tenet in devops is to release a smaller number of changes more frequently to production. This dramatically reduces the amount of testing that needs to be done, but it still needs to be done.
That’s where automation comes into play. By automating as much testing as possible each release, we can focus more on the new or changed features and have automation provide the feedback needed to support the rest. We don’t simply automate functional test execution either. No, this means we need to automate unit testing, functional regression testing, performance and security testing, as well as standing up the environments and deploying the application changes required to run all of this.
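The idea of chaining automated quality gates can be sketched in a few lines. This is a minimal illustration, not any vendor's pipeline: each stage is a check that runs in order, and the pipeline stops at the first failure so feedback arrives as early as possible. All stage names and checks here are hypothetical stand-ins for real unit, API, UI, performance, and security suites.

```python
def run_pipeline(stages):
    """Run (name, check) pairs in order; stop at the first failure."""
    for name, check in stages:
        if not check():
            return f"FAILED at {name}"
    return "PASSED"

# Hypothetical stage checks standing in for real automated suites.
stages = [
    ("unit tests", lambda: 2 + 2 == 4),
    ("API tests", lambda: {"status": "ok"}["status"] == "ok"),
    ("UI smoke tests", lambda: True),
    ("performance baseline", lambda: 120 <= 200),  # e.g. p95 latency under budget
    ("security scan", lambda: True),
]

print(run_pipeline(stages))  # → PASSED
```

The fail-fast ordering matters: cheap, fast stages (unit) run first, so most broken builds never reach the expensive UI or performance stages.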
Which leads to the third factor, which is removing dependencies from the equation. Whether through virtualization techniques such as service and network virtualization, or deployment of micro-services to support the environment, or both, the application should be virtually self-contained in a way that accurately represents the functionality and performance of those dependencies in production. Otherwise, we’re not getting a complete picture.
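The core idea behind virtualizing a dependency can be shown with a stub. This is only a conceptual sketch using Python's standard-library mock, not the Service Virtualization product, which also simulates latency and protocol behavior; the payment service and its response shape are hypothetical.

```python
from unittest import mock

def checkout(order_total, payment_service):
    """Charge the order; the payment service is an injected dependency."""
    response = payment_service.charge(amount=order_total)
    return response["status"] == "approved"

# Virtualized dependency: answers like production without calling it,
# so the application under test stays self-contained.
fake_payments = mock.Mock()
fake_payments.charge.return_value = {"status": "approved"}

print(checkout(49.99, fake_payments))  # → True
```

Because the fake mimics the production contract, tests exercise the real application logic while the unavailable or expensive dependency is simulated.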
Sounds easy, right?
But what about that automation piece? What do we need to automate in order to succeed with a continuous testing strategy like this?
It’s important to note that all tests are not equal. There are basic layers of our application testing…
Something about the cost, coverage and frequency of execution…
In theory, it might look like this. In release one we have basic Unit Testing only, and in the following release we focus on automating the API testing, followed by the UI. The problem is, this is not how applications are built and there will still be a lot to cover from the first release by the time we get to release #3, and that’s not including the changes that happen in between.
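The layering above can be made concrete with a toy example, assuming a hypothetical shopping-cart app: many cheap unit tests at the bottom, fewer API-level checks in the middle, and the fewest slow UI checks on top.

```python
def add_item(cart, price):
    """Unit-level building block: pure logic, cheap to test on every commit."""
    cart.append(price)
    return cart

def cart_total_api(cart):
    """Stands in for an API endpoint: exercises the service contract."""
    return {"total": sum(cart)}

# Unit layer: fast, isolated, run most frequently.
assert add_item([], 5) == [5]

# API layer: fewer tests, each covering more of the stack.
assert cart_total_api([5, 10])["total"] == 15

# UI layer would sit on top: slow end-to-end checks, deliberately few.
print("unit and API layers passed")
```

The shape of the pyramid follows from cost: the higher the layer, the slower and more brittle each test, so coverage should concentrate at the bottom.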
When you put it all together, starting from automation of your versioning and build deployment to automated functional, security and performance testing, it can look a lot more complicated but the principles are the same. You just need to start small, and build from there. HPE Software leverages and supports a lot of the ecosystem you will find in a continuous delivery pipeline, which will increase your chances of success.
One such company, which Ayal will tell us about shortly, saw significant improvement in their release cadence and overall quality by moving to continuous testing using HPE Software. This wasn’t some small startup, but a multi-billion dollar company.