We've built Labstep to be a fast, efficient online tool for research. Even so, a bug or two can slip through. Here is how we handle it when that happens!
"How are bugs detected and reported?"
There are a number of ways we capture bugs:
Ideally, the Quality Assurance team catches them during testing in our staging environment.
There is an automation layer that detects exceptions on the frontend and the backend and creates bug tickets for the development team.
In rare cases, and for some performance problems, we rely on users to report issues. Users can reach our Customer Service team through the Intercom widget in the browser.
Once a bug ticket is created, it is triaged and assigned priority. We ask ourselves a number of questions:
Is it a security problem? If so, it receives high priority.
How many users are impacted? Is it a platform-wide issue, or is it local to one part of the functionality?
What’s the severity? Can the bug be worked around, or does it cause a complete crash?
What’s the development cost? How quickly can we develop a fix?
The scores for these questions are then added up, and the fix is placed in the development team’s work queue according to its priority.
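The additive scoring described above can be sketched as a small function. This is only an illustration: the exact weights, caps, and field names here are hypothetical, not Labstep's actual triage values.

```python
# Hypothetical additive triage scoring; all weights are illustrative only.
def triage_score(is_security: bool, users_impacted: int,
                 severity: int, fix_cost: int) -> int:
    """Return a priority score; higher scores are worked on first.

    severity: 1 (cosmetic) to 5 (complete crash)
    fix_cost: 1 (quick fix) to 5 (major rework); cheaper fixes rank higher
    """
    score = 0
    if is_security:
        score += 100                   # security issues jump the queue
    score += min(users_impacted, 50)   # cap so one factor can't dominate
    score += severity * 10
    score += (5 - fix_cost) * 5        # quick fixes get a small boost
    return score

# A platform-wide crash outranks a cosmetic glitch affecting a few users:
crash = triage_score(is_security=False, users_impacted=50, severity=5, fix_cost=2)
glitch = triage_score(is_security=False, users_impacted=3, severity=1, fix_cost=1)
```

Summing independent scores like this keeps the triage transparent: any team member can see why one ticket ranks above another.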
"How are the bugs fixed and deployed?"
Once a bug fix is ready, the development team places it on a so-called “hotfix” branch. This branch is then deployed to our staging environment, where the Quality Assurance team either confirms the resolution or requests further revision.
Once the QA team is happy, the bug hotfix is merged into our master branch and deployed to production. The QA team then again confirms that the problem has been fixed, this time in the production environment.
Hotfixes happen daily, outside of the usual release cycle, and ensure that critical issues are resolved immediately.
"How often do you release?"
Our usual release cycle is about 1-2 weeks.
"What’s the release process like?"
Features reach the development team from two primary inputs:
Strategic direction, where we want to take Labstep for the business to succeed.
User feedback from surveys, user conversations and formal interviews.
Once a set of features has been scoped, they are developed and passed to the Quality Assurance team. We have a number of isolated environments where experimental (and potentially dangerous) features can be tested without impacting users’ data:
A “staging” environment is used for routine release testing.
A “testing” environment is used for more experimental features and migrations.
When the QA team is happy with the release, it is then put into the production environment for all the users.
"When do you deploy new versions?"
Labstep changes quickly, week on week, and we try to ship improvements to users as soon as possible. However, we know that our users are busy people and don’t want disruption during their working day.
For this reason, deployments happen outside of working hours: either late at night on weekdays, at 10pm UK time, or during weekends. This minimises release-related disruption to day-to-day work in the lab.
Each deployment is preceded by a service notification sent to all users on the platform, to ensure that they're not surprised while actively doing work.
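The deployment window described above can be expressed as a small check. This is a minimal sketch, assuming the "10pm UK time on weekdays, or any time at weekends" rule stated here; the function name and structure are our own illustration, not Labstep's actual scheduling code.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

def in_deploy_window(now: datetime) -> bool:
    """True if `now` falls in a low-disruption window:
    any time at weekends, or from 10pm UK time onward on weekdays."""
    uk = now.astimezone(ZoneInfo("Europe/London"))
    if uk.weekday() >= 5:        # Saturday (5) or Sunday (6)
        return True
    return uk.hour >= 22         # 10pm or later on a weekday

# Tuesday 3pm UK time: inside working hours, so no deploy.
afternoon = datetime(2023, 8, 8, 15, 0, tzinfo=ZoneInfo("Europe/London"))
# Tuesday 10:30pm UK time: after hours, deploy allowed.
late = datetime(2023, 8, 8, 22, 30, tzinfo=ZoneInfo("Europe/London"))
```

Converting to the Europe/London zone first means the check stays correct for callers in other timezones and across UK daylight-saving changes.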
"What’s the uptime/downtime for Labstep over the last X days?"
For the last two weeks, August 1-14, Labstep had 100% uptime. The AWS Elastic Beanstalk “Monitoring” panel does not allow us to look further back than that.
Still need help?
Contact us here or start a conversation with a member of our team using our in-app chat.