
Julian Harty
Founder
CommerceTest Limited
United Kingdom
Member for 7 years
Specialises In (based on submitted proposals)
Julian has been working in technology since 1980 and over the years has held an eclectic collection of roles and responsibilities. He was the first software test engineer at Google outside the USA, where he worked for four years as a Senior Test Engineer on areas such as mobile testing, AdSense, and Chrome OS; he also spent 18 months at eBay in a global role. He has been actively involved in testing and test automation for mobile apps since 2006. He develops Android apps, works on testing and test automation for web and mobile apps, and shares much of his material freely. Over the years he has also participated in hundreds of workshops and conferences globally. He is based in the South East of England, and you can find him at conferences, events, and peer workshops worldwide.
Using Analytics to Improve Quality
45 Mins
Keynote
Intermediate
How do we know we're doing a good job when we develop, release, and run software? Software analytics can provide data and evidence that show how we've been performing and help us improve our practices for future releases, so we also improve our software. Used well, analytics increases our leverage and makes both us and our users more satisfied. We will cover usage analytics: how it can help us find problems that escaped our development and testing, and how it can provide insights into how our users use our software.
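As a taster of the approach, here is a minimal sketch of how usage analytics might surface problems that escaped testing. Everything in it is hypothetical: the event shape, the feature names, and the 5% error-rate threshold are invented for illustration, not taken from any particular analytics product.

```python
from collections import defaultdict

def error_rates(events):
    """Compute per-feature error rates from a stream of usage events.

    Each event is a (feature, ok) pair, where ok is False when the
    interaction ended in an error or crash.
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for feature, ok in events:
        totals[feature] += 1
        if not ok:
            errors[feature] += 1
    return {f: errors[f] / totals[f] for f in totals}

def escaped_problems(events, threshold=0.05):
    """Flag features whose in-the-field error rate exceeds a threshold -
    candidates for bugs that escaped development and testing."""
    return sorted(f for f, rate in error_rates(events).items() if rate > threshold)

# Example: 'export' fails far more often in the field than 'search'.
events = [("search", True)] * 98 + [("search", False)] * 2 \
       + [("export", True)] * 8 + [("export", False)] * 2
print(escaped_problems(events))  # ['export']
```

A real pipeline would aggregate events server-side, but the principle is the same: usage data turns "we think it works" into measurable evidence.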
Analytics Driven Software Engineering for Mobile Apps
480 Mins
Workshop
Advanced
There's little need to work in a vacuum as an isolated developer or team. Our software tools, our apps, and our users all provide information we can use to improve our practices and the apps we produce. Some of it is leading information, available and applicable before we release the app. Other information lags the release of the app to testers and users: we receive it as the app is being used.
The information may help us reflect on our existing work, what went well and what we missed or didn't do quite as well as we'd like to do. We can also use it to improve how we work in future, for instance to test more effectively and efficiently, to learn from production and real world use of our software, etc. We can choose to work faster, increase the value of the feedback we receive, and shorten the feedback cycles so we can iterate faster and more purposefully.
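The leading/lagging distinction can be made concrete with a tiny sketch. The signal names and their classification below are illustrative assumptions, not an exhaustive taxonomy:

```python
# Classify example feedback signals as leading (available before release)
# or lagging (only available once the app is in use).
SIGNALS = {
    "static_analysis_warnings": "leading",
    "unit_test_results": "leading",
    "crash_reports": "lagging",
    "app_store_reviews": "lagging",
}

def available_before_release(signals):
    """Return, sorted, the signals we can act on before shipping."""
    return sorted(name for name, kind in signals.items() if kind == "leading")

print(available_before_release(SIGNALS))
# ['static_analysis_warnings', 'unit_test_results']
```

Shortening the feedback cycle often amounts to converting a lagging signal into something closer to a leading one, for instance by releasing to a small tester group earlier.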
In this interactive workshop we will investigate and review some of the many and varied tools and sources of information. We will compare information that is easily available with techniques such as implementing mobile analytics and designing automated tests to collect high-value, relevant information explicitly. We'll also investigate the capabilities available to developers, and to those who support mobile apps, to harness these tools and the data they provide to refine and improve our testing. Finally, we'll consider automated tests, their sweet spots, and their blind spots.
In addition to general mobile testing techniques we'll investigate ways to decide what to test next and what can safely remain in the morass of "won't be tested". As we learn more about specific aspects of an app, we can further refine the testing and use various analytics and research to improve it. There's plenty of data available to help us improve the testing, and even the development, of mobile apps if we choose to collect and use it. Privacy and protection of users are also key, and part of being a trustworthy, professional tester, so we'll touch on these topics and how they're generally designed and implemented in mobile apps.
Improving Mobile Apps using an Analytical Feedback Approach
45 Mins
Talk
Intermediate
There are various ways we can improve the testing of mobile apps. These include:
- Better testing
- Test automation
- Scaling testing
- Using software tools e.g. static analysis
For mobile apps we can also incorporate two complementary forms of feedback: analogue feedback created by humans, such as app store reviews, and digital feedback built into the apps, such as mobile analytics. This talk introduces ways to incorporate both, so we can better understand the qualities of our previous work and improve our future work, releasing better apps while also working more effectively and productively.
The feedback we receive can help us adapt and react nimbly, reducing the annoyances for users of our apps. We're also able to see beyond our team's horizon to the rich and varied domains where mobile apps are being used.
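One simple way to combine the two feedback channels is to blend them into a single "needs attention" score per release. The weights, ratings, and crash rates below are invented for illustration; any real prioritisation scheme would need calibration against your own data.

```python
def priority_score(avg_rating, crash_rate, w_rating=0.5, w_crash=0.5):
    """Blend analogue feedback (average review rating, 1-5 stars) with
    digital feedback (crash rate, 0.0-1.0) into a single 'needs attention'
    score in the range 0-1. The equal weights are illustrative only."""
    rating_pain = (5 - avg_rating) / 4  # 0 = delighted users, 1 = all one-star
    return w_rating * rating_pain + w_crash * crash_rate

versions = {
    "1.4": (4.6, 0.01),
    "1.5": (3.1, 0.08),  # reviews dropped and crashes rose after this release
}
worst = max(versions, key=lambda v: priority_score(*versions[v]))
print(worst)  # 1.5
```

Even a crude blend like this makes the trade-offs explicit: a release can look fine on one channel while the other channel is flashing red.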
Does software testing need to be this way? Tools, approaches and techniques to test more effectively
45 Mins
Talk
Advanced
Software development teams recognise that testing is relevant and important. Testers want to add value and do purposeful, meaningful work; however, software automation is encroaching on, and in some cases obviating, much of the hand-crafted testing - including some of the 'automated tests' created by teams. As Nicholas Carr asks in his book The Glass Cage: "Who needs humans anyway?"
And yet, humans - people - have much to contribute to crafting excellent software, including testing the software. This presentation investigates:
- leading automation techniques, to understand more of what they can offer in terms of testing our software
- how structured testing techniques can help all testers, including "exploratory testers"
- where analytics can help
- tools, approaches and techniques to help test more effectively
Understanding UX, and approaches to measuring and testing UX
45 Mins
Talk
Beginner
UX is a widely used, and sometimes abused, term for 'User Experience', typically considered across a population - for instance, all the iOS users of an app.
There are various ways to measure UX. Some aspects can be measured digitally, for instance using web and/or mobile analytics. Others can be inferred: if an application crashes or is killed by the operating system, the UX is unlikely to be positive for the users who were affected. Further aspects may be inferred from what people write about the app or software they're using; however, what people write and what they think often differ and may conflict, so we need ways to interpret the feedback to use it appropriately and usefully. And finally for this section, what people say, what they do, and their facial expressions may provide further clues about their UX.
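A sketch of the 'inferred' measurement mentioned above: treating sessions that end in a crash or an operating-system kill as likely-negative UX. The session-ending labels are hypothetical, not taken from any specific analytics SDK.

```python
def inferred_negative_ux(session_endings):
    """Infer the share of sessions with a likely-negative user experience
    from how each session ended. 'crash' and 'os_kill' endings are treated
    as negative; 'user_exit' is neutral - the user may still be unhappy,
    which is why written and spoken feedback matters too."""
    negative = {"crash", "os_kill"}
    bad = sum(1 for ending in session_endings if ending in negative)
    return bad / len(session_endings)

sessions = ["user_exit", "crash", "user_exit", "os_kill", "user_exit"]
print(inferred_negative_ux(sessions))  # 0.4
```

The point is that some UX signals fall out of data we already have, before anyone writes a review.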
The quality of UX may significantly affect revenues for some organisations, and therefore finding ways to measure and test UX may be vital for the long-term health of the organisation and those who work for it. Bad UX is Bad Business; and conversely Good UX is Good Business.
This talk describes the landscape of UX, including ways to measure UX and test aspects of UX. It is based on ongoing research, including interviews with various organisations and with leaders in the testing community in various countries.
[More] Reliable and Trustworthy Automated GUI Tests
45 Mins
Talk
Intermediate
One of the perennial problems with automated tests is the amount of work many of them need to keep working as the underlying application changes. This is particularly true of poorly designed and/or poorly implemented automated tests, which can be brittle and prone to break when virtually anything changes in the system under test, the test conditions, or the environment.
This session will help you learn or refresh your understanding of how automated GUI tests work, and then identify and discover various approaches to improving the quality of these tests, their interface(s) with the system being tested, and how to design and implement alternatives to existing 'flaky' tests. The concepts apply to both web and mobile app testing.
The session will be based on my experiences of working with people who write and maintain automated tests, and on my own attempts to write trustworthy tests.
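One concrete technique for reducing flakiness, common to both web and mobile GUI testing, is replacing immediate assertions with a polling wait. The sketch below is framework-agnostic and written from first principles; it is not the API of any particular test library.

```python
import time

def wait_until(condition, timeout=10.0, interval=0.25,
               clock=time.monotonic, sleep=time.sleep):
    """Poll a condition instead of asserting immediately - a common way to
    make GUI tests less brittle when the UI updates asynchronously.
    Returns True as soon as the condition holds, False on timeout."""
    deadline = clock() + timeout
    while clock() < deadline:
        if condition():
            return True
        sleep(interval)
    return condition()  # one final check at the deadline
```

A test would then call, say, `wait_until(lambda: page.title == "Home")` (where `page` is whatever handle your GUI driver provides) rather than asserting the title the instant a navigation returns. The injectable `clock` and `sleep` keep the helper itself testable without real delays.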
Agile Mobile Testing
90 Mins
Workshop
Intermediate
Find out what testing works for your mobile app.
Agile software development means we want to maximise progress while minimising waste. Delays cause waste, for instance wasted time and effort; ineffective work causes waste; poor quality causes waste; and bugs cause waste and delay progress.
Mobile apps and the mobile app ecosystem help determine what sorts of testing will be more valuable for the project. This workshop introduces various key concepts and factors related to testing mobile apps effectively. You will have the opportunity to practice testing mobile apps during the workshop to help reinforce your learning and discovery.
We will cover both interactive and automated testing of mobile apps, and find ways to reduce the Time To Useful Feedback (TTUF) so the project team can make more progress while reducing project waste. We will also cover various ways to gather more and better information about the qualities of our mobile codebase and of the quality of the apps-in-use.
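TTUF itself is easy to quantify. In the simplest model it is just the sum of the sequential steps between making a change and seeing actionable feedback; the stage names and durations below are invented to show how, for example, swapping a full UI suite for a smoke suite shortens the feedback cycle.

```python
def ttuf(stage_minutes):
    """Time To Useful Feedback: elapsed time from making a change to
    receiving feedback a developer can act on - here modelled simply as
    the sum of sequential pipeline stages, in minutes."""
    return sum(stage_minutes.values())

before = {"build": 10, "upload": 5, "full_ui_suite": 45, "report": 5}
after  = {"build": 10, "upload": 5, "smoke_ui_suite": 8, "report": 2}
print(ttuf(before), ttuf(after))  # 65 25
```

Real pipelines have parallel stages and variable durations, but even this crude model helps a team see which stage dominates their feedback delay.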
Bring your mobile apps and mobile devices and be prepared to get involved in testing!
If not now, when? If not you, who?
60 Mins
Keynote
Intermediate
Improving Education and Learning in India and beyond
We are fortunate: software and technology provide us with choices and a relatively good income, and enable us to learn and discover virtually any topic or subject.
Agile software development practices enable us to do a better job, sooner, than many other approaches to creating software. We can try things without waiting for 'everything' to be defined, adapt based on experiences and feedback, and involve the people the software touches.
Two years ago, I visited some schools in Western Kenya to see some classrooms I'd sponsored. That visit, and subsequent visits to Southern India, spurred me to apply lean and agile software development practices to improving teaching, learning and education through appropriate technologies. Each pilot project needed to be adapted to the context of the schools, environment and region in order to function.
We have an opportunity, and perhaps an obligation, to apply our skills and expertise to helping others. In doing so, paradoxically, we gain more than we realise. Sustainability, and ultimately success, will come from peers in India who choose to get involved; waiting for 'others' doesn't cut it.