Peter Varhol & Gerie Owen, Technology Strategy Research / Cubic Transportation Systems, USA
Peter Varhol is currently a blog editor and blogger at Toptal, LLC. He is a well-known writer and speaker on software and technology topics, having authored dozens of articles and spoken at a number of industry conferences and webcasts. He has advanced degrees in computer science, applied mathematics, and psychology, and is Managing Director at Technology Strategy Research, consulting with companies on software development, testing, and machine learning. His past roles include technology journalist, software product manager, software developer, and university professor.
Gerie Owen is a Senior Test Manager at Cubic Transportation, Inc. She is a Certified Scrum Master and a conference presenter and author on technology and testing topics. She enjoys mentoring new QA leads and brings a cohesive team approach to testing. Gerie is the author of many articles on technology, including Agile and DevOps topics, and recently developed a curriculum for DevOps 101 training. Gerie chooses her presentation topics based on her experiences in technology, what she has learned from them and what she would like to do to improve them.
Testing Serverless Applications
Serverless computing is a DevOps technique that uses cloud-provided runtimes to execute code components in a defined workflow. Serverless applications aren’t literally serverless—they do, in fact, run on a server. They are more properly called “function as a service” or “event-driven processing.” A serverless application executes by making its way through a defined series of components, rather than running as a full application at all times.
Serverless components execute on runtimes such as AWS Lambda and Azure Functions. These runtimes support multiple programming languages, so teams have some flexibility in how they implement their components. Testing serverless applications involves understanding the relationship between the code and the runtime.
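As a concrete illustration, here is a hedged sketch of what such a component can look like in Python, using the AWS-Lambda-style `handler(event, context)` entry point. The event shape and field names are invented for the example:

```python
import json

def handler(event, context):
    """Minimal Lambda-style function: one step in an event-driven workflow.

    The runtime invokes this entry point with an event payload; the function
    does its small unit of work and returns a response that the next
    component in the workflow can consume.
    """
    name = event.get("name", "world")  # hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Because the component is just a function, unit tests can call `handler` directly with a synthetic event, with no deployed runtime involved; that is the starting point for the testing strategies discussed here.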
In this presentation, Peter and Gerie will provide an overview of serverless computing, runtimes, and the implications for testing. They will discuss serverless testing strategies for API testing, load and performance testing, and security testing. Finally, they will show how serverless testing facilitates “shift left” and continuous testing.
Milovan Pocek, Execom, Serbia
Milovan Pocek has been a Software Tester at Execom for more than five years. With strong technical skills, Milovan is highly interested in test automation. He has worked on various software projects and performed system, integration, acceptance, regression and functional testing using both automated and manual testing methods. Lately, he has mostly been working on projects hosted in the cloud, so he is very interested in cloud testing.
Let's get Cloud - introduction to functional testing of Microsoft Azure
Cloud computing has great momentum, and new cutting-edge technologies are available compared to traditional on-premises architecture. The benefits are numerous and include lower upfront costs, impressive scalability, faster setup and many more. That is why a large number of projects are being migrated to the cloud, and a vast majority of new projects use cloud services. Even though cloud technology provides a lot of benefits, the challenge of testing these systems remains, especially when testing business logic. Azure, a cloud computing service created by Microsoft, is one of the biggest cloud platforms at the moment. I will share my experience of working on an Azure-based solution and in this presentation will:
Lyudmila Peneva, i:fao, an Amadeus group company, Bulgaria
Lyudmila Peneva is an Agile QA Team Lead and Certified Scrum Master. Starting in IT as a QA in 2012, she has gained over 6 years of experience in Agile (both Scrum and Kanban).
She has been involved in a transformation from Waterfall to Agile, and organizes and leads seminars in Sofia on different Agile-related topics.
She has domain knowledge in sports betting & casino, e-commerce and, as of May 2019, travel technology, along with experience in leading virtual Agile teams and in operations management.
QA role in Agile
A lot of organizations are moving from a Waterfall to an Agile methodology. This is a huge change for QAs as well, who become part of a cross-functional team.
So to secure one of the main aspects in Agile – QUALITY, we need to make sure that we understand correctly its meaning, how it works, what the role of a QA in such a team is, and what the Agile approach is.
One of the biggest issues we face in such a transformation is not being able to create a proper Agile test plan. What is more important is to understand the key responsibilities of the QA in the team, and to be able to cooperate and take the right actions to contribute to the team’s success. The questions are: how do we work with and respond to the client’s expectations? How do we measure risk?
The main focus of the presentation is to define the role of the QA in the Agile process and to clarify when and how to test. Let’s see what the impact of the Agile manifesto on testing is.
Željko Kostić, Better Collective, Serbia
Željko Kostić is a developer working with QA. Currently employed at Better Collective’s Niš office, he is hard at work advancing the current QA stack and will gladly offer a helping hand when developing new testing methods.
Always on the lookout for the next big thing in testing, he works with a dedicated testing team, offering support and discovering areas that can be improved. He is on a mission to spread his ideas, taking his first steps at the SEETEST 2020 conference.
Making a test automation framework your own
With the advent of new front-end frameworks and new methods for creating user interfaces, automated testing frameworks are lagging behind. How can we keep up?
Our goal is to make writing and maintaining tests easier, so let’s consider tailoring our frameworks of choice to our needs. We can accomplish that in many different ways, such as structuring the test code to match the application code, isolating components, or writing in-house framework extensions that match our products.
By embracing the frameworks and what they are trying to accomplish, we can take our testing to the next level and emphasize efficiency and reusability.
We will look at the benefits that come with developing our frameworks in these ways, and go over ideas that may help us advance even further.
Wim Demey, CTG, Belgium
For more than 22 years Wim Demey has been active in software testing and has evolved into a generalist covering different aspects and roles within testing. Driven by versatility and a great eagerness to learn new things, Wim is always looking at how and where he can stretch his comfort zone to take on new challenges. He has a special interest in more technical topics like performance testing, test management tools and AI.
Wim is a regular speaker at (inter)national test conferences & seminars.
Is survival of the fittest only for the fastest?
Test automation, continuous integration, pipelines… the need for speed has increased exponentially over the last decade. Applying Darwin’s theory, it is quite simple: only the fittest (in this case, the fastest) will survive. Traditional, manual testers are like a rhinoceros… in danger of extinction.
This talk explains why manual, “slow” testing is still here and has survived every speed craze. Of course, this cannot be done without some specific survival techniques. What do you think of transforming a T-shaped tester into a NoSheep tester? What about skills like “Learn to drink coffee”, “Be a false dumb” or “CRUD the crap”?
Wim brings you a light-weight, funny talk illustrated with dumps from his two decades of testing memory lane.
Mirela Dragu, BearingPoint, Romania
A qualified consultant with 3+ years of experience in quality assurance on software implementation projects, co-creating the digital value of business transformation models. She has successfully supported Agile software testing practices in 6 projects across 4 countries in the banking and insurance industry.
What is Agile Testing? Process, Strategy, Test Plan, Life Cycle Example
Agile testing is a core part of agile software development. Its scope is to ensure delivery of the business value desired by the customer at frequent intervals, working at a sustainable pace.
Topics: Agile testing as incremental approach, Principles, Agile Testing vs. Waterfall Testing, Strategy & Best Practices, SCRUM examples.
Until the next great innovation in software process comes along, the future is clearly Agile.
Alex Todorov, Kiwi TCMS, Bulgaria
Alex is a senior QA engineer and open source hacker with 13+ years of experience. He loves everything open source, public speaking, cooking with wine and riding fast motorcycles!
Alex is the current maintainer of pylint-django and the project lead behind Kiwi TCMS - an open source test case management platform.
Divide & conquer for testers - the failure is in your TV set
Imagine you are struggling to learn a new test framework, or think of a co-worker who took training classes in a new programming language and didn’t fare very well. This happens often in the testing world, despite access to all possible information and resources for learning. On the other hand, there are also people who seem able to transition between various technology stacks very easily. The canonical example here is, of course, developers.
This presentation takes the divide and conquer algorithm as the basic skill that will help you in analyzing new software requirements, learning effectively, getting up-and-running quickly on a new project, debugging failures and working in unknown environments.
This talk offers personal observations on the topic of “learning technology” from a tester’s, developer’s and technical instructor’s point of view. It will cover several mistakes people make, with examples from real teams & projects, and will try to give you an action list on how to avoid these mistakes and make progress in your skills and your learning.
Spoiler alert: it's not only testers who fail at this!
Antoine Craske, La Redoute, Portugal
Antoine is passionate about strategy, innovation, technology and systems. He is leading the engineering teams at La Redoute in Portugal, which have evolved into cross-functional and platform teams while quadrupling in size. Their main focus is to accelerate business transformation through DevOps, streaming architecture, microservices, self-service devex, automation… Active within technology communities, he blogs on laredoute.io, organizes the Tech Meetup in Leiria and contributes to open-source projects.
Successful Daily Product Releases: Reversing the Traditional Test Pyramid
A successful test strategy relies on well-selected priorities, methods and tools, aligned with the business and product imperatives. Test pyramid mental models are a common way to define and structure a test strategy, organizing the various possible combinations of test techniques, tools and automation effort. Do their radical perspectives apply to all contexts? How can we challenge their inherent assumptions? Could current practices and knowledge help us revisit them?
In this talk, Antoine will share the test automation pyramid that allowed his teams to reach a 96% success rate for daily deliveries.
Matthias Ratert, PROGNOST Systems GmbH, Germany
Matthias Ratert is an experienced testing professional with more than 20 years’ experience in software development and testing. Currently Matthias is leading the test and quality department at PROGNOST Systems GmbH, Germany. He has previously worked for Nokia, Visteon (automotive), Teleca (mobile communications) and Secusmart GmbH (encrypted mobile communications). Matthias has spoken three times at EuroSTAR (2009, 2012 and 2015), at iqnite 2010, twice at the Agile Testing Days (2010, 2016), at TestCon Vilnius 2017 and at SEETEST Bucharest 2019.
Hey boss - which test cases shall I execute?
Methodologies and strategies to select the “right” test cases and two case studies on how to automate this
Manual testing is still needed, no doubt. But the honorable goal of running all test cases in every test run leads to enormous pressure, as there is often a lack of time to really “do” all the tests. Therefore each test run has to be planned very accurately, but this planning is complex.
The solution desired by many projects is to prioritize tests. All test management tools offer a way to define a priority (or similar) for test cases, which makes it possible to differentiate between important and less important test cases. BUT: this weighting is usually set only once, during test definition, and is rarely adjusted. However, data and information gained in the course of a project require dynamic adjustment of the priorities. Unfortunately, this adjustment must be done manually for each single test case, taking several input factors into account. Therefore it is very often done only sporadically, or never at all.
If the test catalogue reaches a critical size, one becomes reluctant to define new tests or to extend existing ones. More tests will cost more time in the future: test execution lasts longer, and adjusting the priorities becomes even more complicated. As a result, some areas are tested incompletely or not at all! It is therefore essential to have a good test selection strategy right from the beginning, so that the right test cases are executed at the right time. This ensures that the test team concentrates on the important and urgent test areas even under time pressure.
In this talk I will first explain the data and factors involved in dynamic test prioritization. After that, I would like to briefly introduce two Master’s theses I supervised that automate this dynamic test prioritization. The prioritization algorithms have been optimized to expose critical faults in a system as early as possible. In addition, new methods have been developed to challenge and inspire the tester. The respective work has been, and still is, successfully used in projects.
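To make the idea of dynamic prioritization concrete, here is a minimal hypothetical sketch in Python. The factors (recent failures, staleness, coverage of changed code) and the weights are illustrative assumptions, not the algorithms from the theses:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    recent_failures: int       # failures observed in the last N runs
    runs_since_executed: int   # staleness: runs since this test last ran
    covers_changed_code: bool  # touches code changed in this build?

def priority(tc: TestCase) -> float:
    """Illustrative dynamic-priority score: higher means run sooner.

    Weights are arbitrary placeholders; a real project would tune
    them from historical defect data.
    """
    score = 3.0 * tc.recent_failures      # failing tests tend to fail again
    score += 0.5 * tc.runs_since_executed # stale tests regain priority
    if tc.covers_changed_code:
        score += 5.0                      # changed code is the riskiest
    return score

def select(tests, budget):
    """Pick the `budget` highest-priority tests for this run."""
    return sorted(tests, key=priority, reverse=True)[:budget]
```

The point is that the score is recomputed from live project data before every run, which is exactly the adjustment that is too tedious to keep doing by hand.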
Victor Ionascu, Axway, Romania
Victor has been passionate about testing for more than 12 years and is involved in extra activities such as:
• Speaker at International and National events focused on Project Management, Agile & Testing
• Continental Judge for the Software Testing World Cup in 2016
• Organizer of internal testing workshops
He holds CAT & ISTQB certifications that have helped him organize the activity of his teams, with a strong focus on Agile and testing.
He has advanced step by step in his career: from QA, to QA lead, to QA manager, project manager & Scrum master.
Defect mass in Test Strategy
A project with:
How can we find the most impacted areas by our developments for a milestone/service?
How many tests should we run for each impacted area?
If I have automated the tests, shall I run manual also?
All these questions led to a formula that can provide the answers and serve as a guideline for creating a better test strategy.
We assign a mass to each defect depending on its severity, and a mass to each user story, and then calculate the most impacted areas. Combining these numbers with the number of tests for each area yields figures that give a clearer view of the impacted areas.
This is not a magic formula and there are a lot of variables that need to be considered, but with small manual tweaks you can, in the end, bring a bit of light to this dark area.
At the end of this presentation, you will have the formula, tips & tricks and some good stories about it.
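The formula itself is the subject of the talk; purely to illustrate the idea of severity-weighted masses aggregated per area, a sketch might look like the following (the weights and severity names are invented for the example):

```python
# Hypothetical severity weights; the talk's actual formula may differ.
SEVERITY_MASS = {"critical": 8, "major": 4, "minor": 2, "trivial": 1}

def area_mass(defects):
    """Sum the mass of defects logged against one functional area.

    `defects` is a list of severity strings, e.g. ["critical", "minor"].
    """
    return sum(SEVERITY_MASS[s] for s in defects)

def rank_areas(defects_by_area):
    """Order areas from most to least impacted by total defect mass."""
    return sorted(defects_by_area,
                  key=lambda a: area_mass(defects_by_area[a]),
                  reverse=True)
```

Ranking areas this way gives a first answer to “where should we concentrate our tests?”; the per-area test counts then decide how many to run there.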
Anton Angelov, Automate The Planet Ltd, Bulgaria
Anton Angelov is CTO and Co-founder of Automate The Planet, inventor of the BELLATRIX Test Automation Framework and the MEISSA Distributed Test Runner. Anton has 10 years of experience in the field of automated testing. He designs and writes scalable test automation solutions and tools, and consults and trains companies regarding their automated testing efforts. Part of his job is to lead a team of passionate engineers helping companies succeed with their test automation using the company’s BELLATRIX tooling. He is best known for his blogging at Automate The Planet and his many conference talks.
The 5th Generation of Test Automation Frameworks
The need for test automation nowadays is undeniable. To choose the right solution for our context, we need to know what our options are and fully understand them. We will talk about the five generations of test automation frameworks, what they include, who uses/used them and how they are related to the evolution of the QA profession.
The full-stack test automation frameworks are the 5th generation of tools. You will hear why they are different, what they include and why this is important. Understanding the evolution of automation tooling in its entirety will help you better define the requirements for your test solution, or give you ideas on what to improve in your existing one. As a bonus, you will see demonstrations of sample features of such a framework and get inspired to build them yourself.
Gjore Zaharchev, Seavus, North Macedonia
Gjore Zaharchev is an Agile evangelist and heuristic testing fighter with more than 13 years of experience in automated, manual and performance software testing for various domains and clients. In this period Gjore has led and managed QA people and QA teams of different sizes across Europe and the USA. He recognizes testers as people with varied problem-solving skills and an engineering mindset, and believes that software testers are more than mere numbers to clients. He currently works at Seavus with the official title of Quality Assurance Line Manager, responsible for the software testing team. He is also an active speaker at several conferences and events in Europe, and a Testing Coach at the SEDC Software Academy in Skopje.
Effective Test Automation using a Pattern Object Model
If a good developer can write bad code, what guarantee is there that a tester with no coding experience will write good code? In a bigger team of testers, each with their own characteristic way of writing code, things can get pretty painful when a tester changes teams or, even worse, leaves the company. Maintenance is difficult even while the tester is still on the team, since the same logic ends up duplicated across a huge number of page objects that are almost impossible to refactor in a short period of time. The Pattern Object Model helped us gain the trust of new clients by delivering automated tests on day one of the project, and regain the trust of existing clients by reducing maintenance time.
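As an illustration of the kind of centralization such a model aims for, here is a generic page-object layering sketch in Python. This is not the Pattern Object Model itself, only the duplication problem it addresses; the `FakeDriver` stands in for a real WebDriver so the sketch runs without a browser:

```python
class FakeDriver:
    """Stand-in for a real WebDriver, so the sketch runs without a browser."""
    def __init__(self):
        self.log = []
    def click(self, locator):
        self.log.append(f"click {locator}")
    def type(self, locator, text):
        self.log.append(f"type {text!r} into {locator}")

class BasePage:
    """Shared interaction logic lives here once, not in every page object."""
    def __init__(self, driver):
        self.driver = driver
    def fill(self, locator, text):
        self.driver.type(locator, text)
    def submit(self, locator):
        self.driver.click(locator)

class LoginPage(BasePage):
    """Page objects declare only locators and intent, no duplicated plumbing."""
    USER, PASSWORD, LOGIN = "#user", "#password", "#login"
    def login(self, user, password):
        self.fill(self.USER, user)
        self.fill(self.PASSWORD, password)
        self.submit(self.LOGIN)
```

When the interaction logic is centralized like this, refactoring touches one base class instead of a huge number of page objects, which is precisely the maintenance pain described above.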
Alper Keleş and Vahid Garousi, Saha BT A.S., Turkey
Alper Buğra Keleş is a computer engineer who graduated from Istanbul University in 2012 and currently works as a project lead and testing consultant at Testinium.
He has extensive international experience in the software industry as a software developer, system analyst, and project management consultant and trainer. He applies best testing practices to large-scale software projects and conducts projects to reach the best level of effective testing and better quality.
Vahid Garousi is an Associate Professor (Senior Lecturer) of Software Engineering at Queen’s University Belfast, UK. Previously, he worked as an Associate Professor in the Netherlands (2017-2019), Turkey (2015-2017), and Canada (2001-2014). Dr. Garousi received his PhD in Software Engineering from Carleton University, Canada, in 2006. His research expertise covers software engineering, software testing, empirical studies, action research, and industry-academia collaborations.
In parallel to his academic career, he is a practicing software engineering consultant and coach, and provides consultancy and corporate training to software teams and companies in various areas of software engineering, including software testing and quality assurance, model-driven development, and software maintenance.
Dr. Garousi was selected as a Distinguished Speaker for the IEEE Computer Society from 2012 to 2015. He is a member of the IEEE and the IEEE Computer Society, and is also a licensed professional engineer (PEng) in the Canadian province of Alberta.
Test automation with the Gauge framework: Experience and best practices
Gauge is a recent, cutting-edge behavior-driven development (BDD) tool that allows test engineers to develop automated test cases by writing the flow of a test case in free form, in a natural language such as English. It solves the rigidity (strict syntax) of earlier BDD tools that require writing test cases in the “Given-When-Then” format, which is not always easy or possible.
In the context of Testinium, a large software testing company, which provides software testing services, tools and solutions to a large number of clients, we have actively used the Gauge framework since 2018 to develop large automated front-end test suites for several large web applications.
In this talk, the speaker will share several examples and best practices of developing automated tests in natural-language requirements using the Gauge framework. By learning from the ideas presented in the talk, attendees will be able to consider applying the Gauge framework in their own test automation projects.
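To illustrate the underlying idea of binding free-form sentences to code, here is a toy dispatcher in Python. It is not Gauge’s actual API (Gauge ships its own step bindings, e.g. via the `getgauge` library); it only demonstrates the mechanism of matching `<placeholder>` parameters in a natural-language step:

```python
import re

# Toy step registry: a free-form sentence with <placeholders> maps
# onto a Python function. For illustration only; not the Gauge API.
STEPS = []

def step(pattern):
    """Turn 'Search the catalogue for <term>' into a named-group regex."""
    regex = re.compile("^" + re.sub(r"<(\w+)>", r"(?P<\1>.+)", pattern) + "$")
    def register(fn):
        STEPS.append((regex, fn))
        return fn
    return register

def run_step(line):
    """Find the implementation matching a spec line and call it."""
    for regex, fn in STEPS:
        m = regex.match(line)
        if m:
            return fn(**m.groupdict())
    raise LookupError(f"no implementation for step: {line!r}")

@step("Search the catalogue for <term>")
def search(term):
    return f"searched for {term}"
```

The appeal of this style is that the spec stays a readable English sentence while the binding machinery, whatever its real implementation, stays out of the test author’s way.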
Yossi Rosenberg, Gett, Israel
Yossi Rosenberg holds a BSc in computer science from the Academic College of Tel Aviv-Yaffo and a DevOps certification from Ness College.
After a short period as a web developer for a bank, along with some freelance work, Yossi switched to the field of automation development, working for Applied Materials for 2 years on a multidisciplinary system. Later on, he worked for Thomson Reuters as an automation infrastructure developer for 2.5 years, and eventually became intrigued by the challenges of mobile automation and moved to Gett as automation tech lead for the mobile team.
During this time, Yossi constantly explored various automation infrastructure ideas and frameworks on many platforms (API, Web UI, mobile, latency and performance, and more).
Yossi is an automation enthusiast excited by exploring new technologies and methodologies to make the automation process as efficient, functional, and low-maintenance as can be.
Automation infrastructures dos and don’ts
In this talk, Yossi will address one of the hottest subjects in the automation development world: the dos and don’ts of our journey developing a new automation architecture.
You will learn 3 main use cases from real life:
1. Developing a new automation infrastructure while deprecating an old one, along with all of its capabilities (Web UI, API and mobile). Some of the questions that will be answered here are: How do you start? How do you face the challenge of keeping great existing components while dropping the ones that hinder the process?
2. Developing a dedicated infrastructure for each kind of testing field (API, Web-UI, performance and more). The advantages and disadvantages of this approach will be covered.
3. Developing a building-block, parametrized automation infrastructure that allows manual QA engineers to develop tests.
Good and bad practices in automation infrastructure and framework development will be emphasized. Yossi will use Java throughout the slides, but rest assured that participants will easily be able to apply this knowledge in any other OOP language.
Georgi Rusev has more than 20 years of professional experience in the software industry as a test automation engineer and manager. He has helped companies like Sciant, VMware, Experian, Naxex and iFAO establish solid processes around automating their testing activities. His experience ranges from virtualization software, software modeling tools and security gateways to business applications in travel, trading and credit scoring. Over the years he has taken part in a series of initiatives to promote and establish the software testing profession in Bulgaria: Program Committee participant at SEETEST, initiator and board member of the ISTA conference, and others.
Look for Models and Apply Patterns (Automate at scale)
This presentation will walk the audience through several examples where test validation is challenging due to a lack of tools for that particular job, or where the available tools do not offer the needed flexibility.
It will show that in such cases the easiest way to approach the problem is often not the direct one, but rather some simplification of the testing domain: transforming the data model of what needs to be tested into something that is easier to solve, or better yet, into something you already know how to get done.
We should always try to do so, as simpler, more intuitive approaches beat complex and fragile ones. And when we eliminate fragility and design well, we can achieve scale and build antifragility.