Tokyo · Apr 21st 01:00 - 01:45 PM JST · Hall A

Do bugs speak?

Yes, they do. People speak different languages such as English, German, French, and Chinese, but is it possible to communicate with bugs as well? It is important to understand them, because they really do tell us something. Valuable information lies beneath the defects of a software product, and mining that information promises improvements in quality, time, effort, and cost.

Problem Definition

A comprehensive analysis of all reported defects can provide precious insights about the product. For instance, if we notice that a bunch of defects cluster around one feature, we can conclude that the feature should be investigated and cured. We can also make observations about the severity or assignee of similar defects. In short, there are potential patterns waiting to be discovered beneath the defects.


Defect analysis is very important for QA people, and especially for QA managers. We use many views of the data to form an opinion about the product itself or our procedures. For instance, while monitoring the distribution of defects across testing types (functional, performance, documentation, etc.), we will discuss how to judge the quality of our testing approach, i.e. whether we are applying all types in a balanced way. Over another graph, which tracks the gap between open defects and resolved defects, we will discuss what action items we can take when the gap widens. Finally, we will see how ML assistance can reduce manual effort and cost.
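As a toy illustration of these monitoring ideas (hypothetical data, not the speaker's actual tooling), the sketch below computes the distribution of defects across testing types and the open-vs-resolved gap as of a given date:

```python
from collections import Counter
from datetime import date

# Hypothetical defect records: (created, resolved_or_None, testing_type)
defects = [
    (date(2023, 1, 5), date(2023, 1, 9), "functional"),
    (date(2023, 1, 6), None, "performance"),
    (date(2023, 1, 7), date(2023, 1, 8), "functional"),
    (date(2023, 1, 8), None, "documentation"),
]

# Distribution of defects across testing types
by_type = Counter(t for _, _, t in defects)

def open_resolved_gap(defects, on):
    """Difference between defects opened and defects resolved as of a date."""
    opened = sum(1 for created, _, _ in defects if created <= on)
    resolved = sum(1 for _, done, _ in defects if done is not None and done <= on)
    return opened - resolved

print(dict(by_type))                                 # {'functional': 2, 'performance': 1, 'documentation': 1}
print(open_resolved_gap(defects, date(2023, 1, 9)))  # 4 opened, 2 resolved -> 2
```

Plotting `by_type` as a pie chart, or the gap over a sliding date range as a line chart, gives exactly the kind of views discussed above.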

Results & Conclusion

In this session, we discuss data mining from bugs and the usage of ML in defect management. The objectives of the study are:

  • To present the ways in which defects can be analyzed
  • To present how ML can be used to make observations over defects
  • To provide empirical evidence supporting the second objective



Outline/Structure of the Talk

* How to learn from bugs

* Metrics to learn from bugs

* Ways to collect metrics


Learning Outcome

Lessons Learned

After all the experience I have gained building a successful bug management process, I have insights into the most critical parts of building such a lifecycle:

  • What kind of environment we should build to be able to extract hidden patterns and valuable information from defects.
  • What kinds of monitoring can be used to make the status as visible as possible. (Various pie charts, bar graphs, and tables will be shown to demonstrate the distribution of defects across different aspects.)
  • What the valuable information in each monitoring activity can be.
  • How we can classify/cluster defects using NLP techniques with various algorithms (including SVM, Decision Trees, and Ensemble Methods), with benchmarking and results (accuracy rates).
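To make the classification idea concrete, here is a minimal, self-contained sketch of routing defects to components from their summary text, using a tiny naive Bayes classifier over invented training data; the talk's actual pipeline (NLP features fed into SVM, Decision Trees, and Ensemble Methods) is more elaborate:

```python
import math
from collections import Counter, defaultdict

# Toy labeled defect summaries (hypothetical data, not from the talk)
train = [
    ("login page crashes on submit", "ui"),
    ("button misaligned on dashboard", "ui"),
    ("query times out under load", "backend"),
    ("database connection pool exhausted", "backend"),
]

# Train a tiny multinomial naive Bayes over word counts
word_counts = defaultdict(Counter)
label_counts = Counter()
vocab = set()
for text, label in train:
    words = text.split()
    word_counts[label].update(words)
    label_counts[label] += 1
    vocab.update(words)

def classify(text):
    """Return the most likely component label for a defect summary."""
    scores = {}
    for label in label_counts:
        total = sum(word_counts[label].values())
        # log prior + log likelihood with add-one smoothing
        score = math.log(label_counts[label] / len(train))
        for w in text.split():
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("dashboard button crashes"))  # "ui"
```

The same structure, with a real corpus and a stronger model, is what reduces manual triage and assignment effort.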

The presentation also aims to help attendees to:

  • Mine valuable data from defects
  • Get insights about test cycles
  • Reduce defect assignment errors
  • Perform correct defect triage


Target Audience

Testers, POs, Developers


  • Mesut Durukal

    Mesut Durukal - Common Pitfalls and Life-saving Solutions in Agile Testing

    90 Mins

    Motivation: I have experienced many challenges, both technical and social, and have tried to develop solutions to cope with them throughout all the testing activities I have been involved in. Eventually, I wrapped them up into a list of common pitfalls that any tester may encounter, together with some golden rules for successfully avoiding them and managing a test project.

    The most common problems are:

    • In continuous testing environments, time pressure can sometimes shift the focus away from quality. How can we ensure we stick to the correct mindset even under stress?
    • While trying many approaches, how can we identify the best one and decide to go with it? How can we monitor the progress, trends, and results of each individual POC?
    • Test Reliability & Test Smells
    • Efficiency

    Some fundamental solutions are:

    • Being truly agile: adapt new solutions quickly.
    • Manage the progress: be aware of what is going on by setting KPIs to track and monitor with tools like CloudWatch and Grafana.
    • The technical part: automation principles for the sake of robustness; solutions to reduce flaky tests, analysis effort, and costs.
    • Holistic testing!


  • 90 Mins

    Motivation: After a new project starts, the initial planning is done and the very first development activities begin. But as we all know, all products are tested, which means that at some point (ideally soon after development starts) testing activities also begin.


    So, from the start of testing until maturity, there is a long and tricky road. In this talk, I discuss what kinds of challenges are experienced and how we can cope with them.


    I will explain:

    * How we managed to collect insights from bugs.

    * How we managed releases.

    * How we reduced testing effort before shipment.

    * How we maintained the stability of CI/CD.


    • Uncertainties
    • Hidden information
    • Missing guidance
    • Complex and complicated systems under test
    • A lot of deployments and limited resources
    • Adaptation problems in the team



    • Information mining: Walk through and explore the system, Customer Surveys, Requirement Analysis, Specification Benchmarks
    • Define processes: Bug Life Cycle, Test Case Life Cycle, Workflows: Code Review Guideline, Acceptance/Exit Criteria
    • Decide Tools: Issues, Tasks, Tests, Results, Code
    • Maintain Tests: Coverage, Suite Management, Markers
    • Automation: Implement the skeleton: Flexible enough for further improvements, Robust and open for RCA, Reporting
    • Define Levels and Subsets: Priorities for the executions


    Results & Conclusion

    In this talk, I will present how to build a good software testing life cycle and achieve quality in software projects from scratch.


  • Yoshiyuki Ishikawa

    Yoshiyuki Ishikawa / Yosuke Kushida / Kei Tanahashi - Kaonavi's DevOps in Practice: Secrets of Operations Automation and Test Automation

    45 Mins
    Sponsor Talk






  • hagevvashi dev

    hagevvashi dev - Tabelog's Software Test Automation Design Patterns

    20 Mins



    • Feedback after code changes is slow
    • Tests cannot be reused




    • Architecture design
    • Pipeline design
    • Framework design
    • Test-case automation design
    • Infrastructure design


  • kamo shinichiro

    kamo shinichiro / Hiroyuki TAKAHASHI / Yasuko NAITO - A Fact-Driven Improvement Approach: Putting "Accelerate" (「LeanとDevOpsの科学」) into Practice

    45 Mins


    The book "Accelerate" (published in Japanese as 「LeanとDevOpsの科学」) introduces performance indicators known as the four keys, along with capabilities said to strongly drive improvement of the four keys. We are now measuring these indicators, aiming to keep developing our product better, faster, and more safely based on facts.

    In particular, although there are some articles introducing how to measure the four keys, can they actually be measured, and how are the measurement results being put to use?
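As a rough sketch of what measuring the four keys can look like in practice (hypothetical data and definitions, not the speakers' actual setup), deployment frequency and lead time for changes can be derived from a commit/deploy log:

```python
from datetime import datetime, timedelta

# Hypothetical deploy log: (commit_time, deploy_time) pairs
deploys = [
    (datetime(2023, 3, 1, 9, 0), datetime(2023, 3, 1, 15, 0)),
    (datetime(2023, 3, 2, 10, 0), datetime(2023, 3, 3, 10, 0)),
    (datetime(2023, 3, 5, 8, 0), datetime(2023, 3, 5, 12, 0)),
]

# Deployment frequency: deploys per day over the observed window
window_days = (deploys[-1][1] - deploys[0][1]).days or 1
deploy_frequency = len(deploys) / window_days

# Lead time for changes: median commit-to-deploy duration
lead_times = sorted(deploy - commit for commit, deploy in deploys)
median_lead_time = lead_times[len(lead_times) // 2]

print(deploy_frequency)   # 1.0 deploys per day
print(median_lead_time)   # 6:00:00
```

Change failure rate and time to restore would come from incident records joined against the same log.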

  • Yuki Nishimura

    Yuki Nishimura - The Importance of DevOps and Our Journey, as Seen While Tackling Challenges for the Service Expansion of Airワーク

    45 Mins

    Since last year, I have been involved in system improvement as an architect in connection with the service expansion of 「Airワーク 採用管理」.

    • DB load spikes irregularly
    • More than 1,000 alert notifications occur per month
    • AWS failures occur, but the cycle of preventing recurrence is not working well
    • A monitoring platform exists, but what should be watched has not been kept up to date
    • Deployment and rollback take a long time




    In this session, I will talk about the operational-improvement case studies we carried out on Airワーク, and the practices and ways of thinking for stabilizing system operation that we gained from them.

  • Yukio Okajima

    Yukio Okajima / Yotaro Sato - Turning Compliance into a Team Strength: The Future of DevOps from an Auditor's Perspective

    45 Mins


    • Compliance requirements demand the separation of developers and operators, so how do we do DevOps under that constraint?
    • Compliance requirements demand approval by role, so how do we build an automated pipeline under that constraint?


    To offer insights into how DevOps teams can face compliance, auditors from PwC and engineers from 永和システムマネジメント Agile Studio have collaborated to publish a report and a reference implementation for making DevOps and compliance coexist (official publication is planned for the beginning of March).


  • T. Alexander Lystad

    T. Alexander Lystad - [Video] Measuring Software Delivery and Operational Performance to improve commercial outcomes

    Chief Cloud Architect
    45 Mins

    In this talk, I summarize the evidence that shows how engineering performance drives commercial performance, including Visma's own internal research. I'll show why and how we measure Software Delivery and Operational Performance across ~100 teams and how we use it to improve commercial results.

  • Gal Shelach

    Gal Shelach - SLA is for lawyers, SLO is where the money hides

    20 Mins

    We have thousands of frontend servers in 7 data centers serving over 500k HTTP requests per second, all expected to respond as quickly as possible to meet our SLA.


    Having said that, not breaking the SLA is one thing, but how to define the SLO is another. Let's say our SLA specifies a response time of p99 < 1000 ms. This gives us a wide range within which we can set the SLO.


    It may seem logical to set the SLO as low as possible; that way, we are less likely to break our SLA. But what if I tell our customers that we have a magic feature that boosts revenue and only takes 400 ms? Should we then define a different SLO? Maybe we should embrace the risk of breaking the SLA from time to time in exchange for bigger revenue most of the time?
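To make the SLA/SLO gap concrete, here is a minimal sketch (with invented numbers, not the speaker's data) that computes a nearest-rank p99 over a sample of response times and checks it against both thresholds:

```python
import math

# 100 hypothetical response times (ms), already sorted ascending
samples = [100] * 50 + [200] * 30 + [350] * 18 + [600, 900]

def percentile(sorted_samples, p):
    """Nearest-rank percentile: smallest value covering p% of samples."""
    k = max(0, math.ceil(p / 100 * len(sorted_samples)) - 1)
    return sorted_samples[k]

p99 = percentile(samples, 99)
SLA_MS = 1000  # contractual limit: p99 < 1000 ms
SLO_MS = 400   # stricter internal target

print(p99)           # 600
print(p99 < SLA_MS)  # True: the SLA is met
print(p99 <= SLO_MS) # False: the tighter SLO is missed
```

This is exactly the interesting region: the SLA is safe, yet the SLO signals that the 400 ms revenue-boosting feature is at risk.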


    My lecture will describe three systems we developed to utilize our infrastructure dynamically and achieve an RPM-oriented SLO. These are Java infrastructures we use internally to provide the most valuable responses to our customers within the limits of our Service Level Agreement.

  • Omar Galeano

    Omar Galeano - How to communicate the ROI of test automation to your business - テスト自動化のROIをビジネスサイドに伝える方法とは?

    Quality Engineering Principal
    20 Mins

    In the era of modern software development, test automation has gone from a ‘nice to have’, to a critical component of delivering quality software at velocity.

    Business decision makers generally agree that test automation is a good idea, but may hesitate to commit appropriate resources, especially when budgets or deadlines are tight.

    We will interactively walk through scenarios that highlight the key factors that deliver test automation value. You will learn how to quantify the ROI of test automation and influence fiscally-minded decision makers with confidence!


    Note: Simultaneous interpretation between English and Japanese will be provided for this session, so please feel free to attend even if you are not comfortable with English!