It's been more than 10 years since the term DevOps was coined. Beginning with "10+ Deploys a Day", the desire to ship software faster in smaller batches has become a full-fledged industry. There are transformation consultancies, tools, job descriptions, certifications, and a host of books and articles aimed at explaining why we are doing it wrong and what we should be doing in order to get it right.

In this talk, I will walk us from the original concerns to the current state of the movement. I'll point out trends that have shown value and talk about where teams, companies, and the industry often get stuck. I'll share my experiences working with Riot Games, Chef, Soylent, and a host of other companies that have helped shape (and sometimes confuse) the goals, the tooling, and the culture that we collectively call DevOps.


Outline/Structure of the Talk

History of DevOps in 5 minutes.

Continuous Integration/Continuous Delivery

It's the People, Stupid!

Are we doing it right?

DevSec.../.../.../Ops?

Where are we going?

Questions?

Learning Outcome

Attendees will come away with a stronger understanding of the history of DevOps, its goals and aims, and a clearer sense of where things are headed. They will laugh a lot and cry a little (it has been a rough 10 years), but most importantly they will have a sense of the immense progress we have made.

Target Audience

Anyone working in software development or infrastructure.


  • Mesut Durukal - Reliable and Faster Deliveries: Complete Test Automation

    Mesut Durukal
    QA Automation Engineer
    Indeed
    45 Mins
    Talk
    Beginner

    Just as testing is an essential part of the software development lifecycle, automation is an indispensable part of testing. Nowadays most of us are involved in automation in some way, since it enables continuous testing and minimizes manual effort. It is great, but it also brings several difficulties. Let's discuss how we can cope with test automation challenges by going over real-life experiences.

    What to expect from this session

    Since most QA people do test automation and commonly face the same issues, we will talk about proposals for coping with those test automation challenges. We will discuss ways to avoid tests that break after page-layout updates in the web apps we test, to minimize the number of flaky tests, to automate scenarios with hardware dependencies, and to make implementation easier.

    Common problems:

    One of my recent projects was a warehouse automation system in which autonomous mobile robots move around. I was supposed to test a scenario in which there is an obstacle in front of a robot, so it should find a new path and move around the obstacle. How did I automate this test scenario? We will discuss several proposals for this and some other problems.

    Some of the most common difficulties are:

    • Coping with updates/changes
    • Hardware in the system under test (SUT)
    • Implementation and maintenance
    • Time-consuming executions
    • Reproducing issues found during executions

    Fundamental solutions:

    • Improving locators/selectors: using test data IDs (see the sketch after this list)
    • Simulation and test harnesses
    • Non-functional testing
    • Analyzing execution duration and parallelizing runs
    • Logging and saving evidence in the pipelines
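
    As a minimal sketch of the first item above (Python with Selenium assumed; the URL and the data-testid attribute are illustrative, not from the talk), a dedicated test ID keeps a locator stable across the layout changes that break class- or structure-based selectors:

    ```python
    # Minimal sketch: stable locators via test data IDs (illustrative only).
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/login")  # hypothetical app under test

    # Fragile: coupled to layout and styling classes, breaks on redesigns.
    # driver.find_element(By.CSS_SELECTOR, "div.col-md-3 > button.btn-primary")

    # Robust: a dedicated test ID stays stable across layout changes.
    button = driver.find_element(By.CSS_SELECTOR, "[data-testid='login-submit']")
    button.click()
    driver.quit()
    ```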


  • Moutia KHATIRI - How DevOps & the "Everything as code" paradigm are powering L'Oréal's Beauty Tech transformation

    Moutia KHATIRI
    Tech accelerators CTO
    L'Oréal
    20 Mins
    Talk
    Advanced

    L'Oréal started its global Beauty Tech transformation more than 3 years ago, launching a massive program to make the group the first beauty tech company in the world.

    In the Tech accelerators, multiple AI-powered use cases are being built to deliver cutting-edge tooling to employees as well as clients. The expected time to market is extremely short, so how do we deliver complex software fast without sacrificing its stability, its security, or its business value?

    Let me walk you through the passionate journey that led us to build a highly automated and industrialized delivery ecosystem, one that lets our teams concentrate on what's most important: creating value for the users.

    All of these use cases are built cloud-native on GCP. How did we design our environment strategy? How do we train our AI models?

    What are the technical stacks that we use?

    How did we automate EVERYTHING, from repo creation, IAM set-up, infrastructure… to application deployment, audit and testing?

    How central was Infrastructure as Code (Terraform) in this ecosystem? And how did we manage to implement a real "everything as code" and "everything as a service" across our whole organization?

    How did GCP empower or limit our DevOps/CI-CD ambitions? What roadblocks did we come across?

    If you want to know more, join this session to discover these and other exciting topics!

  • Yoshi Yamaguchi - The Latest DevOps Trends: Secrets of High-Performing Technology Organizations

    20 Mins
    Talk
    Beginner

    DORA (DevOps Research and Assessment), an organization within Google, publishes the annual State of DevOps Report, an analysis of how organizations around the world are adopting DevOps, based on worldwide surveys. In this session, drawing on the latest edition of the report published in November 2022, I will explain, with examples, what technologies high-performing IT organizations are applying and how they apply them. I will also introduce how participants can put the report to use in their own organizations, and the analysis methods for doing so.

    I can provide the talk in English as I have given this talk at DevOpsTalks Plus Conference in Singapore as well.

  • kamo shinichiro / nao sasaki - An Improvement Approach That Starts from Facts, Ep. 2: Facing the Outcomes Beyond the Four Keys

    45 Mins
    Talk
    Intermediate

    At DevOpsDays Tokyo 2022 last year, we gave a talk titled "A Kaizen Approach That Starts from Facts: Putting Accelerate (The Science of Lean Software and DevOps) into Practice". This time, we present the follow-up, including a case study of measuring the Four Keys.

    We visualized the performance of our development organization by measuring the Four Keys. Building on that, we are now visualizing all kinds of data about the development organization to help us grasp the situation and discover issues, and we are moving on to the step of quantitatively measuring product outcomes in order to find the correlation between business growth and product development.
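
    As a hedged illustration of what such a measurement can look like (a minimal Python sketch, not the presenters' actual pipeline; the deployment records and field layout are invented), three of the Four Keys can be derived from deploy data like this:

    ```python
    # Sketch: three of the Four Keys from deployment records (fields invented).
    from datetime import datetime
    from statistics import median

    deploys = [
        # (commit time, production deploy time, caused an incident?)
        (datetime(2023, 4, 3, 9),  datetime(2023, 4, 3, 15), False),
        (datetime(2023, 4, 4, 10), datetime(2023, 4, 5, 11), True),
        (datetime(2023, 4, 6, 8),  datetime(2023, 4, 6, 9),  False),
    ]

    # Deployment frequency: deploys per week over the observed window.
    window_days = (max(d for _, d, _ in deploys) - min(d for _, d, _ in deploys)).days or 1
    deploys_per_week = len(deploys) / window_days * 7

    # Lead time for changes: median hours from commit to running in production.
    lead_time_h = median((d - c).total_seconds() / 3600 for c, d, _ in deploys)

    # Change failure rate: share of deployments that caused an incident.
    failure_rate = sum(1 for _, _, bad in deploys if bad) / len(deploys)

    print(f"{deploys_per_week:.1f} deploys/week, "
          f"median lead time {lead_time_h:.1f} h, "
          f"failure rate {failure_rate:.0%}")
    ```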

    Concretely, we found Evidence-Based Management (EBM) to be an effective starting point for quantifying outcomes. To increase business value, EBM advocates defining metrics in four Key Value Areas and running a loop of experiments toward achieving the product goal.

    EBM's four Key Value Areas, which influence one another, are the following:

    • Current Value (CV)
    • Unrealized Value (UV)
    • Time to Market (T2M)
    • Ability to Innovate (A2I)

    [Figure: the four EBM Key Value Areas, taken from the Evidence-Based Management Guide (Japanese translation): https://www.servantworks.co.jp/resources/evidence-based-management-guide-japanese/]

    It is also said that quantitatively visualizing Current Value (CV) and Unrealized Value (UV) reveals the true value of the product.

    We therefore interviewed members who think deeply about product value, such as product managers and business planners, and identified the elements that can be quantified.

    With the above in mind, in this session we will present our work on visualizing outcome metrics using EBM.

  • Mesut Durukal
    QA Automation Engineer
    Indeed
    45 Mins
    Talk
    Beginner

    Elevator Pitch

    We try to ensure quality and build confidence in our deliverables. But how can we be sure that we are actually ensuring quality? Let's discuss in this talk how quality can be measured.


    Abstract

    Motivation: 

    Nowadays, competition in the software world is fierce. Everyone tries to reduce time to market and cost while offering a broad set of features. To make our product more competitive, we keep adding compatibility and scalability requirements. But sometimes less is more: should we deploy in one week with several issues, or in two weeks with no major issues?

    We are talking about the elements that sustain a deliverable: time, coverage, cost, and quality. Unlike the others, quality is challenging to measure.

    Correlated with quality, the performance of quality teams is also under the spotlight:

    • What are they really doing? What is the outcome of QA activities?
    • Are they successful? What do the QA activities change?

    Problems: 

    How can we measure quality and the outcome of QA teams and their activities?

    Solutions: 

    First, we will talk about getting ready for measurement: unless we have proper tools and state-transition definitions, we won't be able to analyze the data. Then the big question is what to measure. We can list hundreds of metrics, but which of them give the best insights? Once we know what to collect, automation is the way to minimize manual effort. After getting the raw data, we can build monitoring tools: graphs and dashboards are great ways to provide visual evidence.

    Eventually, last but definitely not least is the interpretation of the results. Numbers are great, but what can we understand from them? Are there any action items we need?

    Wrapping up, the lifecycle of a monitoring activity looks like this:

    • Customize environments to enhance transparency/visibility
    • Choose metrics: proposed decision criteria
    • Automate the measurement
    • Create visuals
    • Analyze the results


    Results & Conclusion: 

    I present several good practices for monitoring quality. But eventually, the conclusion will be: there is no way to measure quality directly! We can't say we have 1 kilogram, 5 units, or 10 meters of quality.

    So what are we talking about, then? Measuring key metrics is, of course, still a good opportunity to get insights. We can learn a lot from the bugs reported, such as (see the sketch after this list):

    • Which components are the most prone to errors?
    • What kind of testing activities should we improve?
    • What are the most common root causes of the issues?
    • What is the ratio of reopened bugs?
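
    As a minimal, hedged sketch (the issue-tracker record fields below are invented, not the speaker's), a few of these indicators can be derived from bug records like this:

    ```python
    # Sketch: deriving bug-based quality indicators from issue-tracker records.
    # Field names ("component", "root_cause", "reopened") are illustrative.
    from collections import Counter

    bugs = [
        {"component": "checkout", "root_cause": "race condition",     "reopened": True},
        {"component": "checkout", "root_cause": "missing validation", "reopened": False},
        {"component": "search",   "root_cause": "missing validation", "reopened": False},
    ]

    # Which components are the most prone to errors?
    by_component = Counter(b["component"] for b in bugs)

    # What are the most common root causes?
    by_cause = Counter(b["root_cause"] for b in bugs)

    # What is the ratio of reopened bugs?
    reopened_ratio = sum(b["reopened"] for b in bugs) / len(bugs)

    print(by_component.most_common(1), by_cause.most_common(1), f"{reopened_ratio:.0%}")
    ```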

    An important part of the talk presents not only the metrics that work but also the ones that do not! What does the number of bugs or the number of test cases really mean? I won't ask you to forget about those numbers, but I will convince you not to be obsessed with them alone.

    Finally, I will introduce a set of metrics that is not commonly used: emotional metrics. Since we are talking about quality, it concerns not only the product itself but also the way we develop it and the people involved. We can't build quality with unhappy teams or people. So let's talk about team satisfaction as well.


  • Mesut Durukal - What did we cook in Testing Kitchen: Testing as a Service

    Mesut Durukal
    QA Automation Engineer
    Indeed
    45 Mins
    Talk
    Beginner

    Abstract

    Elevator Pitch

    Speaking in terms of software development, we can think of front-end libraries, backend services, mobile applications, and so on. This time we will talk about developing testing itself as a service. Ladies and gentlemen, let me share my journey of implementing our TaaS.

    Problem:

    Software testing is becoming more challenging every day. Along with the technical requirements, we have to cope with a very dynamic way of development and an intense scope to be handled in a limited time. While several teams or groups try to cope with these challenges, we observe that:

    • There is no standard among these teams: each follows a different approach/strategy (if it has one at all).
    • Some teams are struggling with problems that others have already solved.
    • There is inefficiency due to duplication.

    To give a better sense of the implementation variations, I attached a slide showing how a very simple scenario can be automated in 4 different ways: https://drive.google.com/file/d/1c0aboWtk9WffpxNHCSuooI3QfvrrTsD0/view?usp=sharing

    Solution:

    To remove this imbalance, support those struggling with problems that others have already solved, and ensure code quality across the test frameworks used by the various teams, we came up with the idea of centralized test-framework development.

    The motivation was to develop a framework that handles the most common problems and to offer it to the teams as a service. For this purpose, we executed several steps, starting with collecting the most common difficulties; in this way, we figured out which problems to target. Then we designed an architecture to solve those problems and started the implementation. The steps we executed are as follows:

    • Requirements definition
    • Prioritization
    • KPI and Metrics formulation
    • Architecture & Documentation
    • Backlog grooming
    • Implementation
    • Training
    • Monitoring


  • Sugii Msakatsu / Yoshitomo Kanaji - Introducing the "Tokyo Metropolitan Government Agile Playbook" for Practicing Agile at the Tokyo Metropolitan Government

    45 Mins
    Talk
    Intermediate

    We will introduce a case study of promoting agile at the Tokyo Metropolitan Government.

    The Tokyo Metropolitan Government is currently fully committed to digital transformation (DX).

    We consider agile development essential for improving and reforming operations and for delivering better public services, and we have been putting it into practice.

    In this session, we will introduce the "Tokyo Metropolitan Government Agile Playbook", a collection of agile development cases and patterns compiled under the leadership of the Tokyo Metropolitan Government Bureau of Digital Services.

    • How to get started with agile development, an approach that differs from conventional methods

    • Ways to foster collaboration between business divisions and development teams

    • Examples of issues encountered and improvements made along the way

    The content will be useful to people outside the public sector as well.

  • Gal Shelach - SLA is for lawyers, SLO is where the money hides

    20 Mins
    Talk
    Intermediate

    We have thousands of frontend servers in 7 data centers serving over 500k HTTP requests per second. They are all expected to answer as quickly as possible to meet our SLA.

    That said, not breaking the SLA is one thing; how to define the SLO is another. Say our SLA specifies a response time of p99 < 1000 ms. That leaves a wide range within which we can set the SLO.


    It may seem logical to set the SLO as low as possible; this way, we are less likely to break our SLA. But what if I tell our customer that I can return a response in 400 ms, or I can return a response in 800 ms that will boost their revenue?

    Should we then define a different SLO? Maybe we should embrace the risk of breaking the SLA from time to time in order to have bigger revenue most of the time?


    In my lecture I'll describe three systems we developed to utilize our platform dynamically and achieve an RPM-oriented (revenue-per-mille) SLO. While processing requests, we evaluate the value of each feature and determine whether we have the time and resources to use it for revenue generation. A sketch of the idea follows below.

    These are Java infrastructures we use internally to provide the most valuable responses to our customers within the limits of our Service Level Agreement.
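
    As a hedged sketch of that idea (not the actual Taboola systems; Python for illustration, with invented feature values and timings), a handler can spend its remaining latency budget on the most valuable optional features and skip the rest:

    ```python
    # Sketch: spending a latency budget on the most valuable optional features.
    # Feature names, values, and costs are invented for illustration.
    import time

    FEATURES = [  # (name, estimated revenue value, estimated cost in seconds)
        ("personalized_ranking", 1.00, 0.150),
        ("extra_recommendations", 0.40, 0.120),
        ("thumbnail_refresh", 0.10, 0.200),
    ]

    def handle_request(budget_s: float = 0.800) -> list[str]:
        """Apply optional features, best value-per-cost first, within the budget."""
        deadline = time.monotonic() + budget_s
        applied = []
        for name, value, cost in sorted(FEATURES, key=lambda f: f[1] / f[2], reverse=True):
            if time.monotonic() + cost > deadline:
                continue  # not enough budget left for this feature; skip it
            time.sleep(cost)  # stand-in for the real feature computation
            applied.append(name)
        return applied

    print(handle_request())
    ```

    The value-per-cost ordering here is just one simple policy; the systems described in the talk presumably weigh features using richer signals.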


  • 45 Mins
    Workshop
    Beginner

    Look, you need metrics for your agile organization, #amiright? In the immortal words of Peter Drucker,

    “If you can’t measure it, you can’t improve it.”

    So, you need to measure things, and measure them well. And you need to measure the right things too! 

    Metrics on employee happiness, theoretical value, and throughput of work are just plain silly. I will reveal the metrics that you need. That mean something. And that get results.

    Join us as we discover THE BEST AGILE METRICS!

  • Gal Shelach - Hey, developer, DIY all the way to production

    20 Mins
    Talk
    Beginner

    We have >350 developers creating over 40 new releases a day. The transition from QA to production means exposing a new feature to 1.4B monthly unique users and up to 500K HTTP requests/sec, which is scary.

    The Taboola philosophy is that developers should be independent and take ownership of their features from end to end. Our company doesn't have QA teams, so each developer is solely responsible for delivering a feature.
    We, as the team accountable for both production stability and the development experience, aim to provide Taboola's R&D developers with tools and methodologies that help them achieve that. To accomplish this, I will describe the technologies we use, as well as the principles and culture we follow.

    I will explain the steps every developer on our R&D team takes, from designing a new feature to fully shipping it. We enable developers to develop quickly and independently all the way to production by using tools like special canary tests and smart canary deployment on hundreds of servers worldwide.
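
    As a hedged sketch of what a staged canary rollout can look like (Python for illustration; the stages, checks, and timings are invented, not Taboola's tooling):

    ```python
    # Sketch of a progressive canary rollout loop (illustrative only).
    import random
    import time

    STAGES = [0.01, 0.05, 0.25, 1.00]  # fraction of servers on the new release

    def canary_checks_pass() -> bool:
        """Stand-in for real checks: error rates, latency, business KPIs."""
        return random.random() > 0.02  # pretend each stage has a 2% failure chance

    def rollout(release: str) -> bool:
        for fraction in STAGES:
            print(f"deploying {release} to {fraction:.0%} of servers")
            time.sleep(0.1)  # stand-in for deploy time plus a soak period
            if not canary_checks_pass():
                print("canary checks failed: rolling back to the previous release")
                return False
        print("canary passed at every stage: rollout complete")
        return True

    rollout("release-42")
    ```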

    The session is a fairly informal talk with both technical and conceptual aspects.

    The way we work makes me a better developer. I am becoming more creative and responsible, taking greater risks, and, most importantly, enjoying my life. I will be happy to convince everyone in the audience to work as we do.
