Sessions

Gerie Owen & Peter Varhol

Medullan, Cubic Transportation Systems, USA

Gerie Owen is a Quality Engineering Architect at Medullan. She is a Certified Scrum Master, Conference Presenter and Author on technology and testing topics. She enjoys mentoring new QA Leads and brings a cohesive team approach to testing. Gerie is the author of many articles on technology including Agile and DevOps topics. She recently developed a curriculum for DevOps 101 training. Gerie chooses her presentation topics based on her experiences in technology, what she has learned from them and what she would like to do to improve them.

Peter Varhol is currently a blog editor and blogger at Toptal, LLC. He is a well-known writer and speaker on software and technology topics, having authored dozens of articles and spoken at a number of industry conferences and webcasts. He has advanced degrees in computer science, applied mathematics, and psychology, and is Managing Director at Technology Strategy Research, consulting with companies on software development, testing, and machine learning. His past roles include technology journalist, software product manager, software developer, and university professor.

The Road from Quality Assurance to Quality Engineering: Testers Lead the Transformation

Quality engineering forms the underpinnings of the continuous integration process, but many Agile and DevOps teams rely on testers to perform quality assurance activities. The shift from quality assurance to quality engineering can be a bumpy road, yet it is the critical factor in effective continuous integration. By building quality in throughout the process, quality engineering enables increased velocity with increased quality. Testers are uniquely skilled and positioned to champion and lead the transformation from quality assurance to quality engineering.

In this presentation, Peter and Gerie will provide an understanding of quality engineering, discuss shift left and shift right, and provide a framework with real-life examples of how teams can evolve from quality assurance to quality engineering. We will discuss specific ways in which testers can enable the transformation to quality engineering, as well as the roles of testers on DevOps teams, both as test specialists in performance, security and user experience and as managers of the quality engineering process. Finally, we will provide actionable advice for testers seeking to embrace and lead the transformation to quality engineering.


Dawid Pacia

Brainly, Poland

Dawid is a Test Automation Manager, QA Lead, trainer in testing and Python, public speaker and lecturer. He is 1/3 QA, 1/3 Python, 1/3 Lead. A tech freak following all the newest technologies (and implementing them on his own), he is a fan of the Agile approach to project management and products.

He is leading and supporting the best and the happiest QA team! Actively speaking (and traveling) around the world (combining both passions). He is the organizer and originator of the first regular Ukrainian QA meetup, “UkrainQA.”

Dawid likes coffee with people, good food and drinks, new technologies, Discworld, sharing knowledge, the Agile approach, and improvements everywhere.

Put your TestOps shoes on! Improving Quality by Process Automation

Automate everything! That’s the most fitting description of DevOps culture, a culture that quickly created a job position of the same name, mostly focused on broadly defined automation leading to fast product delivery. The division used to be pretty simple: DevOps = process automation, QA = test automation. But is that the right approach? And what about the increasingly popular term “(Dev)TestOps”?

Classical testers are now also very often responsible for setting up and maintaining a major part of the Continuous Integration or Continuous Delivery environment (especially the test automation part). The main problem from the business perspective is, as always, time. For example, many start-ups and companies in a phase of early, dynamic growth cannot afford to spend much time on test automation. How do you speed up the delivery process in that case? How do you quickly generate a valuable increment?

I’ll show you how to improve and speed up the testing and delivery process by clever automation.


Vojin Popovic

Svea Ekonomi AB, Serbia

Vojin Popovic is QA Manager at Svea Ekonomi. He started his career on a path to become a developer and got into QA somewhat by accident, when a friend convinced him to do a QA internship during a summer break in college. After the three-week internship he fell in love with QA, and the feeling has not changed in 15 years.

His second career choice was psychology, psychotherapy to be more precise. He started studying it seven years ago and has found the synergy of the two fields really exciting. Feel free to talk to him about QA or psychology, not forgetting his private interests in board gaming, D&D and travel.

Improving Communication and Teamwork Using Perceiver Element Grid (PEG)

When we communicate on a project, we assume that our team members have the same understanding of the team values as we do. Unfortunately, that is usually not the case. If we take the word "commitment" or "responsibility" or "change", they are all open to interpretation.
What it means for me to be responsible is my personal opinion, not a universal rule of what responsibility is. In every company I have worked for, the main issue within the team has been communication, and it usually comes down to misunderstanding.

The perceiver element grid (PEG) allows us to arrive at a common understanding of team values and of roles within the team using a simple grid. Using PEG in a 45-minute session, you will be able to significantly reduce miscommunication and communication issues within the team. The adaptation of the PEG grid to agile teams is based on the work of psychologists Harry Procter and George Kelly.


Maros Kutschy

Ness, Slovakia

Test automation is his hobby, and he also gets to spend time on it at work. He has been doing automation since 2008 and has worked at Ness since 2014.
Maros Kutschy specializes in a Java/Selenium/Cucumber framework and works with tools such as Maven, Git, Bitbucket, Jenkins, IntelliJ IDEA, Percy, Applitools and Galen.
On a daily basis he does UI and API automation.

Maros is the creator of the Jasecu automation framework, and he has also created a couple of Udemy courses, two of which are related to the Jasecu framework.

Visual UI Testing with Percy

Along with functional testing of the UI with Selenium and layout testing of the UI with the Galen framework, it is essential to also perform visual UI testing. Among the tools on the market I chose Percy (https://percy.io/); even with the free plan you get 5,000 screenshots per month.
I think performing automated visual checks instead of time-consuming manual checks is a very good strategy for companies that want to be more effective and productive.

I will present the story of how we implemented Percy on our project.
I will share my experience with Percy, including a live demo of how to integrate it into any framework, how to run the tests, how to check the results and how to integrate it into a CI process.
The talk is aimed at manual testers, automation testers, test managers and front-end developers.
Key Takeaways:
• Why to consider investing time in visual UI testing
• Why to choose Percy for visual UI testing
• How to integrate it into your automation framework and your CI process


Amanda Logue

BMM Test Labs, Canada

Amanda Logue is a senior test manager who has defined organizational test policy, selected and implemented appropriate test strategies, assessed test coverage and proposed best practices to meet BMM’s Quality Assurance (QA) testing business objectives and quality goals for multiple clients.

At BMM, Amanda serves as an expert in all areas of gambling QA. She has successfully run Class II, Class III, VLT, remote and online gaming, and lottery QA projects. She has also run pre-compliance projects in each of these areas providing her with a strong knowledge of the compliance areas included in the syllabus and course. In addition, Amanda is one of the authors of the Casino Gambling tester syllabus and course.

Amanda is also the Director of Marketing for the Canadian Software Testing Board, the Chair of the Testing Body of Knowledge working group of ISTQB, and the co-author of the Gambling Industry Tester Syllabus.

Testing in the Gambling Industry

With over 400 jurisdictions worldwide, gambling industry games have many rules and regulations they must comply with. These rules affect operating systems, hardware, software, mathematics, and visual and auditory functionality. Testing to ensure that games comply with the rules of every jurisdiction in which they will be played follows many foundation-level testing practices, but expands into other areas such as compliance testing, return-to-player calculation testing and player-perspective testing.

In this presentation attendees will learn about:
• the different types of gambling,
• the key concepts in the gambling industry,
• the role of an independent test lab and regulatory commissions,
• the test phases within the gambling industry,
• becoming a certified gambling industry tester.
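Return-to-player (RTP) calculation testing, mentioned above, can be illustrated with a minimal sketch: given a game's paytable, the theoretical RTP is the expected payout per unit wagered, which is then checked against the band a jurisdiction allows. The paytable and the 85–98% band below are invented for illustration, not real regulatory figures.

```python
# Sketch: checking a game's theoretical return-to-player (RTP) value.
# RTP is the expected payout per unit wagered, computed from the paytable.
# The paytable and the allowed band are invented illustration values.

paytable = [
    # (probability of outcome, payout multiplier on the wager)
    (0.35, 0.0),  # lose
    (0.45, 1.0),  # stake returned
    (0.15, 2.0),  # small win
    (0.05, 3.0),  # big win
]

def theoretical_rtp(table):
    total_p = sum(p for p, _ in table)
    assert abs(total_p - 1.0) < 1e-9, "outcome probabilities must sum to 1"
    return sum(p * payout for p, payout in table)

rtp = theoretical_rtp(paytable)
assert 0.85 <= rtp <= 0.98, "RTP outside the jurisdiction's allowed band"
print(f"theoretical RTP = {rtp:.1%}")
```

In practice a compliance test would repeat this check against each jurisdiction's band and complement the calculation with simulated game rounds.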


Kari Kakkonen

Dragons Out Oy, Finland

Kari Kakkonen has an M.Sc. from Aalto University (aalto.fi) and has also studied at the University of Wisconsin-Madison (wisc.edu) in the United States. He holds ISTQB Expert Level Test Management (Full), Agile Tester, Test Automation Engineer, Scrum Master, SAFe and DASA DevOps certificates, and works mostly with agile testing, lean, test automation, DevOps and AI. He has been in testing since 1996.

Kari is the author and CEO of Dragons Out Oy, creating a fantasy book to teach software testing to children. He works in Finland at Knowit (knowit.fi), a Nordic ICT services company known for testing consultancy, software development and other innovative ICT-related services.

Kari is Treasurer of the Finnish Software Testing Board FiSTB (fistb.fi) and served on the ISTQB Executive Committee 2015-2021. He has been included in the IT magazine Tivi’s listing of the 100 most influential people. Kari is a co-author of the book Agile Testing Foundations. He is a singer, snowboarder, kayaker, husband and dad.

How children learn testing with dragons

The presentation introduces fantasy writing as a means to teach software testing to children, using the example of Kari Kakkonen’s fantasy book Dragons Out. The book has opened new avenues for getting people interested in software testing much earlier than anyone has previously tried. The learning combines drawing exercises, listening to or reading testing content, understanding through the power of analogies between fantasy and software testing, and exploratory testing.

The presentation shares the insights gathered from schools and extrapolates what they can mean for the learning of software testing in general. The talk will also include a condensed version of how the book project went and where it is today. The session includes an interactive element, engaging the audience in a brief exploratory testing exercise.


Anton Angelov

Automate The Planet Ltd, Bulgaria

Anton Angelov is CTO and Co-founder of Automate The Planet, inventor of BELLATRIX Test Automation Framework, and MEISSA Distributed Test Runner. Anton has 10 years of experience in the field of automated testing. He designs and writes scalable test automation solutions and tools. He consults and trains companies regarding their automated testing efforts. Part of his job is to lead a team of passionate engineers helping companies succeed with their test automation using the company’s BELLATRIX tooling.

He is best known for his blogging at Automate The Planet and for his many conference talks.

Taking Quality to Next Level: Metrics for Improvement

Are your testing and quality assurance activities adding significant value in the eyes of your stakeholders? Do you have difficulty convincing decision-makers that they need to invest more in improving quality? Selecting metrics that stakeholders understand will get your improvement project or program past the pilot phase and reduce the risk of having it stopped in its tracks.

Anton Angelov will present a story of how his teams managed to adopt a more in-depth bug workflow, from noticing the bug through the triage process to metrics boards. There will be a real-world example of how to collect 15 essential QA metrics from Jira. To calculate them, we will use Azure Functions and visualize them through charts in Power BI. By the end of the presentation, you will have many ideas on how to optimize your existing defect management through new statuses and practices, and how to monitor it through quality metrics, which can help you improve your software development and testing further.
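To give a flavour of the kind of metrics such a workflow can produce, here is a minimal sketch computing two common defect metrics from exported issue records. The record layout, field names and statuses are hypothetical examples, not Anton's actual workflow or the Jira API.

```python
# Sketch: computing two common defect metrics from exported issue records.
# The record layout and the status/resolution names are hypothetical,
# not the actual Jira schema or the workflow described in the talk.

def reopen_rate(issues):
    """Share of closed bugs that were reopened at least once."""
    closed = [i for i in issues if i["status"] == "Closed"]
    if not closed:
        return 0.0
    reopened = [i for i in closed if i["reopen_count"] > 0]
    return len(reopened) / len(closed)

def rejection_rate(issues):
    """Share of reported bugs triaged as invalid (not real defects)."""
    if not issues:
        return 0.0
    rejected = [i for i in issues if i["resolution"] == "Rejected"]
    return len(rejected) / len(issues)

issues = [
    {"status": "Closed", "reopen_count": 1, "resolution": "Fixed"},
    {"status": "Closed", "reopen_count": 0, "resolution": "Fixed"},
    {"status": "Closed", "reopen_count": 0, "resolution": "Rejected"},
    {"status": "Open",   "reopen_count": 0, "resolution": None},
]

print(reopen_rate(issues))     # 1 of 3 closed bugs was reopened
print(rejection_rate(issues))  # 1 of 4 reported bugs was rejected
```

A high reopen rate points at weak fixes or vague bug reports; a high rejection rate points at triage or reporting problems, which is exactly the kind of signal a metrics board makes visible.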


Daniel Angelov

FFW Agency, Bulgaria

Daniel has been working at FFW as a QA lead on large-scale solutions for some of the agency’s biggest clients. For the past year he has taken on the role of QA Domain Knowledge Lead, where his work is mostly related to improving processes in terms of quality.

He's a working-from-home father of a newborn, and if he looks sleepy, it's because he's been stress-tested for the past year.

Utilizing the QA Engineers Throughout the Whole Project Lifecycle

QA is not simply a profession. It's a state of mind. Are you a fan of the "Money Heist" series? Well, there we have a great technical QA engineer who identifies the bugs by looking at the documentation/architecture and at the end provides steps to reproduce for the biggest exploit ever.

Here we won't talk about illegal activities, but about how to fully utilize the best QAs you have. Working through tasks and creating automation scripts is what everyone thinks is sufficient, but QA engineers can bring value at every stage of the project:
• Pre-sale;
• Discovery;
• Creative;
• Active development;
• Handover.


Gjore Zaharchev

Seavus, North Macedonia

Gjore Zaharchev is an Agile Evangelist and Heuristic Testing fighter with more than 13 years of experience in automated, manual and performance software testing for various domains and clients. In this period Gjore has led and managed QA people and QA teams of different sizes, based in different locations across Europe and the USA. He recognizes testers as people with varied problem-solving skills and an engineering mindset, and believes that software testers are more than mere numbers to clients. He currently works at Seavus with the official title of Quality Assurance Line Manager, responsible for the software testing team.

Gjore is also an active speaker at several conferences and events in Europe and a testing coach at the SEDC Software Academy in Skopje.

How to improve your automated tests

Every day, more and more time and money are invested in test automation, and yet often there are no results from it at all.

Have you ever wondered whether the problem lies with your tests or with the application under test? Very often test automation engineers throw the blame at developers for building bad apps, but that is not the reason for test automation failures. The real reasons vary: a focus on developing a framework rather than tests, a poor understanding of locators/XPaths, a lack of familiarity with solid design patterns, and many others.

In this presentation, I will give some solutions for improving your automated tests and establishing a solid practice for speeding up the test automation development process.
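One of the causes named above is unfamiliarity with solid design patterns. As an illustration, here is a minimal sketch of the Page Object pattern, one widely used remedy, with a stub standing in for a real Selenium driver; all page and locator names are hypothetical.

```python
# Sketch of the Page Object pattern: locators and page interactions live
# in one class, so tests read as intent and a locator change touches only
# one place. The driver is a stub standing in for a real Selenium WebDriver.

class StubDriver:
    """Pretends to find elements and records what was typed/clicked."""
    def __init__(self):
        self.actions = []
    def find(self, locator):
        driver = self
        class Element:
            def type(self, text):
                driver.actions.append(("type", locator, text))
            def click(self):
                driver.actions.append(("click", locator))
        return Element()

class LoginPage:
    # All locators for this page are kept in one place.
    USERNAME = "#username"
    PASSWORD = "#password"
    SUBMIT = "button[type=submit]"

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find(self.USERNAME).type(user)
        self.driver.find(self.PASSWORD).type(password)
        self.driver.find(self.SUBMIT).click()

driver = StubDriver()
LoginPage(driver).log_in("alice", "secret")
print(driver.actions[-1])  # the click on SUBMIT is recorded last
```

The test itself would then be a single readable line, `LoginPage(driver).log_in(...)`, instead of a string of raw locator lookups.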


Milovan Pocek & Jelica Kapetina

HTEC, Serbia

Milovan Pocek is a Quality Assurance Engineer at HTEC. Showing good technical skills, Milovan is highly interested in test automation. He has worked on various software projects and performed system, integration, acceptance, regression and functional testing using both automated and manual testing methods. Lately, he is mostly working on projects that are hosted on the cloud, so he is very interested in cloud testing.

Jelica Kapetina is a software engineer at HTEC, and she is very passionate about it. With 5 years of experience, she managed to go from C++ engineer, then mobile and .NET, to an Azure cloud engineer. Currently, she is successfully leading a team to the launch of an interesting cloud solution. Technical discussions with her sister inspire her on a daily basis, and she is a proud owner of one fluffy cat.

How much does it cost to run on clouds? Load and performance testing of Microsoft Azure

Cloud’s cost-saving aspect (compared to traditional on-premises solutions) is one of the main selling points of cloud providers. In reality, it is really hard to estimate the price of a cloud solution.

A lot of attention is also given to the supposedly unlimited performance of cloud solutions. The cloud can handle huge amounts of load without any effect on performance, provided the solution is implemented accordingly. So we need to do load and performance testing if we want to be sure that our solution will perform correctly.

In this presentation we will share the experience we’ve gotten from performance and load testing of a Microsoft Azure solution and will provide answers to the following questions:
• Can we predict the cost of the cloud?
• How to measure the performance of the cloud?
• How to detect and remove bottlenecks in our solutions?
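On the question of measuring cloud performance, load-test results are usually summarized as latency percentiles rather than averages, because tail latency is what users and autoscaling rules react to. A minimal, self-contained sketch (the sample latencies are invented):

```python
# Sketch: summarizing load-test response times as percentiles.
# Percentiles (p50, p95) expose the tail latency that an average hides.
# The sample latencies below are invented for illustration.

def percentile(samples, p):
    """Nearest-rank percentile of a list of response times (ms)."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [120, 95, 110, 102, 98, 450, 105, 99, 101, 97]

print("avg:", sum(latencies_ms) / len(latencies_ms))
print("p50:", percentile(latencies_ms, 50))
print("p95:", percentile(latencies_ms, 95))  # the one slow outlier dominates
```

Here the single 450 ms outlier barely moves the average but defines the p95, which is why performance criteria for cloud services are typically stated as percentile targets.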


Laveena Ramchandani

Deloitte, UK

The tech world is ever growing, and Laveena has been working in tech for over 8 years now. She works in testing and quality assurance, a role that is a good mix of technical work and business awareness. Laveena has learned a lot throughout her career and looks forward to gaining more knowledge, and at the same time to inspiring others and spreading testing eminence around the world.

Testing a Data Science Model

Data is the new gold. Everyone is excited about data science and machine learning models, but as a tester there isn’t much exposure to the world of data science.

In this talk, we will go through my journey of discovering data science model testing and how I contributed value in a field I had never tested before, with the aim of inspiring testers to explore data science models.

Together we’ll look at the background of data science and how data plays a vital role in models; how to train a data science model keeping different personas in mind; and how to bring in processes and strategies to make sure we capture the right output results and the consumers still benefit. In a nutshell: making sure the model’s quality is good and that we have confidence in what we provide to consumers.

By the end of this talk, you will be able to explore testing models and know how to make sure a model’s quality gives the team enough confidence and helps the business.

Takeaway:
• Understanding of Data Science
• How to test models
• Which existing skills we already have that can be applied in a data science team
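Two generic checks of the kind a tester can apply to almost any model are a quality gate (accuracy on a labelled set must clear a threshold) and an invariance check (a perturbation that should not matter must not flip a prediction). A minimal sketch with a toy model; the model, data and thresholds are all invented for illustration.

```python
# Sketch: two generic model checks that need no ML library.
# 1) Quality gate: accuracy on a labelled set must clear a threshold.
# 2) Invariance check: a tiny, irrelevant perturbation of one feature
#    must not flip the prediction.
# The toy model and the data are invented for illustration.

def toy_model(features):
    """Approves a loan (1) when income minus debt is positive."""
    return 1 if features["income"] - features["debt"] > 0 else 0

labelled = [
    ({"income": 5000, "debt": 1000}, 1),
    ({"income": 2000, "debt": 2500}, 0),
    ({"income": 3000, "debt": 100}, 1),
]

def accuracy(model, data):
    hits = sum(1 for x, y in data if model(x) == y)
    return hits / len(data)

def invariant_to_noise(model, features, key, delta):
    """Prediction should survive a tiny perturbation of one feature."""
    perturbed = dict(features, **{key: features[key] + delta})
    return model(features) == model(perturbed)

assert accuracy(toy_model, labelled) >= 0.9                              # quality gate
assert invariant_to_noise(toy_model, {"income": 5000, "debt": 1000},
                          "income", 1)                                   # invariance
print("model checks passed")
```

The same two shapes of check scale up to real models: the labelled set becomes a held-out evaluation set, and the perturbations become the personas and edge cases mentioned in the abstract.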


Milan Novovic

Equaleyes, Serbia

Milan is a QA engineer, mainly interested in how to bring test automation close to newcomers in the software testing space. His other areas of interest are performance testing and coaching small teams on all things QA related. Currently working on a blockchain project, trying to apply previous testing experience to this area.

Robot Framework “From Zero to Robot Hero“

“My goal is to learn test automation” is one of the most common mantras floating around among testers who have just started their testing careers. As with all other scary-sounding subjects, there is great uncertainty among less experienced testers about how and where to start their learning journey.

I will present the case for Robot Framework, an open-source automation framework created with the idea that it can be quickly adopted by testers with different levels of work experience.
Its human-readable syntax also makes it a perfect candidate for both beginners and more experienced testers. I will also talk about robot developers, a new breed of job which combines knowledge of test automation with different robotic processes.


Wim Decoutere

CTG, Belgium

Wim Decoutere is a master in informatics who started his testing career at CTG Belgium almost 15 years ago and has been testing on a number of projects ever since, mostly in the financial sector. Wim feels at home standing in front of a classroom. Since he became a full-time trainer, he has taught hundreds of people about the wonderful worlds of testing and requirements engineering. As a veteran youth instructor with a passion for learning theories and people management, Wim is constantly looking for new ideas to improve his own performance and that of the entire testing team.

Wim is secretary of the BNTQB and an associate member of IREB.

Extreme Shift Left. Out with the requirement engineer?

In my presentation I’m going to explore the idea of a tester doing the requirements engineering, or the inverse: a requirements engineer doing the testing. I will explain how requirements elicitation techniques such as paradox brainstorming and Six Thinking Hats can be beneficial to the tester, but also how test techniques such as decision tables can ensure a higher quality of requirements. I will elaborate on Wiegers’ prioritization matrix, Kano’s classification model and other requirements engineering tools beneficial to testers.

Join me if you want to discover the requirements engineering toolbox and how to apply it in your project.


Wim Demey

CTG, Belgium

For more than 23 years Wim Demey has been active in software testing and has evolved into a generalist covering different aspects and roles within testing. Driven by versatility and a great eagerness to learn new things, Wim is always looking at how and where he can stretch his comfort zone to take on new challenges. He has a special interest in more technical topics like performance testing, test management tools and AI.

Wim is a regular speaker at (inter)national test conferences & seminars.

Consensus-based techniques as the secret to effective entry & exit criteria

Whether you are working on agile or waterfall projects, entry & exit criteria, and even acceptance criteria, are a crucial part of a test strategy. These criteria define the conditions to be met before we start testing, or before deciding to move to the next phase or sprint or to go to production.

In an attempt to make them as objective as possible, testers often lose themselves in defining overly long lists of entry & exit criteria. And guess what: despite all the effort, the criteria are often ignored or minimized by stakeholders. This can be explained by so-called syndromes like the Stockholm, London or Devil’s Triangle syndrome, and in the end it causes a lot of frustration on the tester’s side.

The conclusion is that the missing link is often consensus: when we apply a consensus-based approach, we end up with (more) effective criteria. This talk highlights some of those techniques (e.g. planning poker, Roman voting, fist of five, 1-2-4-all, the sailboat retrospective) which are common practice in agile projects. If you adapt them slightly to a testing context, you get effective criteria. Illustrated with concrete examples, the talk will give the audience the handles to apply these techniques in their own context.


Jeroen Rosink

Squerist, Netherlands

Jeroen is a passionate test professional with about 22 years of experience in testing and test management, in executing, coordinating, coaching and advising roles. Driven by his passion, he always searches for the things that make testing valuable and interesting. Besides presenting several times at the Dutch TestNet conferences (2010, 2012, 2016, 2019), he has also given presentations and/or workshops at SEETEST 2017 Sofia, SEETEST 2018 Belgrade, SEETEST 2019 Bucharest, TestCon 2018 Vilnius, TestCon 2019 Moscow and QA Expo 2019 Madrid.

He also contributed to the TestNet anniversary book “Set your course: Future and trends in testing” (translated from Dutch) and to the book “How to reduce the cost of software testing”.

Low-level approaches for testing AI/ML

AI and ML are new technologies that are being applied at a rapid pace, and their impact on testing will be huge. For a large part, well-known approaches will still be sustainable, but organizations and testers will have to adjust to keep up with the times. Sometimes it is hard to keep up, and yet we still need to start testing. The two approaches presented here help to provide initial insights.

This presentation shares insights into some of the basics of AI and ML. It then discusses two low-level approaches, the Auditors approach and the Black hole approach, which are suitable for use in organizations where knowledge and resources related to AI/ML are still limited and which can be integrated into existing test approaches.

These two approaches can help testers gain new knowledge in a simple way in the short term, and help them better understand the all-embracing approaches that are yet to be developed.


Aleksandar Drinkov

Milestone, Bulgaria

Aleksandar Drinkov has been working as a Test Engineer at Milestone Systems for the past 5 years. He joined the global VMS leader after serving in the military for 7 years, and has played an integral part in Milestone's journey to supporting 10,000+ devices by assuring the high quality of XProtect's device driver layer.
In the past year Alex has rapidly expanded his knowledge and experience and has been working on verifying the quality of key web services.
Hard work, consistency and leading by example are his keys to success in the QA field.

Quality via trust?!

Imagine that you are a company that creates software for video surveillance, a.k.a. a Video Management System, or VMS.
A VMS is naturally meant to operate with various surveillance devices (IP cameras, NVRs, DVRs, access control boards, and the like), most often developed and sold by other companies. Let's call some of these companies Your Partners.
Let's also say that supporting 10,000+ devices from Your Partners would give you a real edge over the competition.
So how do you tackle the challenge of verifying that your software works correctly against all these devices, with all their firmware/API versions?
Release once a decade?
Bet on a small army of manual QA engineers?
Have strong faith that new software comes with no bugs?
Or?
Come join us to find out how we have figured this out at Milestone Systems.