Championing The Success of Women in Science, Technology, Engineering, Maths, and Medicine


(Guest post from Digital Science.)

To mark Ada Lovelace Day, the annual celebration that promotes women working in Science, Technology, Engineering, and Maths (STEM), Digital Science have issued a new report.

“Championing The Success of Women in Science, Technology, Engineering, Maths, and Medicine” includes a collection of think pieces by industry leaders on current issues faced by women in STEM. The report explores areas of gender inequality and its potential causes, offering up a collection of solutions.

Women’s reluctance to step into leading roles, their tendency to suffer from “imposter syndrome”, and career breaks as a result of motherhood are just some of the factors holding them back, alongside the outdated, sexist attitudes they sometimes have to face in the workplace.

The report offers some solutions for fostering better environments for women, including:

  • Proactive responses from the research community to resolve these issues, creating the cultural change needed to bring more women into management roles.
  • Mentors who can encourage women to become more confident in their own abilities and to accept the opportunities that open up to them.
  • Feedback from the academic community as an important way of measuring the rate and range of change.

Suw Charman-Anderson, Founder of Ada Lovelace Day, said:

“As someone who has worked for a decade to inspire women to pursue STEM careers, it’s great to see the effect that Lauren Kane and Alice Meadows’ call to action last year has had. This report is important because it’s so rare for women to see the impact of this kind of activism. Many of the contributors have valuable ideas and advice for how we build on the work that has already been done and how we expand to help even more women, and other minorities, in STEM.”

Contributions include:

  • Foreword: Suw Charman-Anderson, Founder of Ada Lovelace Day
  • What a Difference a Year Makes: Parity at the Podium Revisited: Lauren Kane and Alice Meadows
  • To Accelerate Pace of Change, Women Need to Own Revenue: Tracey L. Armstrong
  • Creating Change For Women in Science, Technology, Engineering, Maths, and Medicine: Rhianna Goozee
  • The Money Shows it is a Man’s World – How Can We Reduce the Difference? Michael Head
  • Shut Up, Sit Back, and Listen! Bastian Greshake Tzovaras
  • Women in Astronomy & Computer Science – There’s Still Work To Do: Kimberly Kowal Arcand
  • Blind Spots: Seeing Sexism in STEM: Buddhini Samarasinghe
  • Does Research Evaluation in the Sciences Have a Gender Problem? What Do Altmetrics Tell Us? Stacy Konkiel

Here at Digital Science we are committed to ensuring that the research community is fully connected and empowered. Inclusivity is key to this – tapping into the expertise of everyone, regardless of gender, race or sexual orientation. In our commitment to improving science, we hope this report will offer up some support and solutions to some of those issues.

The report is openly available to download under a CC-BY license on Figshare. You can share your thoughts using #ChampioningWISreport.



Five-year, $10 million+ deal with robotic cloud lab will double Ginkgo’s foundry output and increase the scale of its synthetic biology innovation

BOSTON & MENLO PARK, CALIF., Oct. 3, 2017 -- Ginkgo Bioworks, the organism company, today announced a collaboration with robotic cloud laboratory Transcriptic. Through a five-year agreement valued at more than $10 million, Ginkgo will incorporate Transcriptic’s cutting-edge robotic automation software into its Boston-based foundries. The combined technologies will bolster Ginkgo’s automation capabilities and engineering capacity, strengthening its platform and cementing a foundation for continued growth.

Ginkgo’s foundries currently rely on software and robotics to automate work on organism design across the flavor and fragrance, enzyme and agricultural industries. This collaboration builds upon Ginkgo’s existing platform, bringing Transcriptic’s unique expertise in flexible lab automation to supercharge designers’ efficiency. Transcriptic’s software integration will automate new parts of Ginkgo’s experimental design process, adding greater flexibility and remote monitoring capabilities. With this collaboration, Ginkgo will double its current monthly foundry output, increase the speed and efficiency of product delivery to existing customers, and establish a flexible platform that can better scale to meet the needs of the ever-expanding biotech industry.

“Transcriptic’s ability to translate organism designers’ vision into reality via lab automation is unparalleled, and brings an unprecedented scale to our organism foundry,” said Barry Canton, Ginkgo Bioworks co-founder. “Transcriptic shares our vision of leveraging the power of technology to transform lab experiments for more efficiency and scale. Automating the right processes allows our team to spend more time on what they do best: thoughtful design, analysis and delivery, so that together we can meet the continued demand from industries rethinking manufacturing with biology.”

This licensing agreement is the first of its kind for Transcriptic, and a significant expansion of its current business model. Transcriptic engineers will be on-premise at Ginkgo, working side by side with organism designers to improve efficiencies within Ginkgo’s foundries and further enhance the software together.

“The Transcriptic platform automates laborious lab processes to make research faster, less expensive and increasingly scalable,” said Yvonne Linney, Transcriptic CEO. “We look forward to bringing our technology to Ginkgo—a company at the forefront of organism design—in a completely new way, and build the future of biotech together.”

This collaboration will be foundational to a number of Ginkgo’s 2017 ventures and partnerships, including work on a new company launched in partnership with Bayer. Announced last month along with an initial $100M Series A investment, the new company will carry out its strain engineering operations from Ginkgo’s foundry and will focus on microbial design to aid nitrogen fixation in certain plants. Earlier this year, Ginkgo acquired leading synthetic DNA provider Gen9 to bring its technology for pathway-length synthesis in house, and today announced the purchase of one billion base pairs of DNA from longtime partner Twist Bioscience, the largest single purchase order in history. With its share of the synthetic DNA market, Gen9’s DNA synthesis capability, and Transcriptic’s software fully deployed on-site, Ginkgo continues its mission to build the industry’s strongest and most robust platform for organism engineering.

About Ginkgo Bioworks
Headquartered in Boston, Ginkgo Bioworks uses the most advanced technology on the planet – biology – to grow products instead of manufacturing them. The company's technology platform is bringing biotechnology into consumer goods markets, enabling fragrance, cosmetic, nutrition, and food companies to make better products. For more information, visit

Webinar Summary: How the Internet of Things is Disrupting Science

(Guest post from Digital Science.)

As part of our thought leadership webinar series, our latest broadcast discussed "How the Internet of Things is Disrupting Science."

We covered a diverse range of topics including:

  • What the Internet of Things (IoT) is and why it’s useful to know
  • Why the IoT is foundational for science’s full potential
  • What the key challenges are facing industry
  • Predecessors to IoT and current technology approaches
  • IoT case studies from Transcriptic and TetraScience – lab tools leading the way

Our panel included:

  • Yvonne Linney, CEO, Transcriptic
  • Umesh Katpally, Novartis Institutes for Biomedical Research
  • Alok Tayi, CEO and Co-founder, TetraScience
  • Laura Wheeler, Head of Digital Communications & Community Engagement, Digital Science

The first speaker to present, Umesh Katpally, began by giving a succinct description of the Internet of Things (IoT): A network of devices that capture and transmit data to people, software, and each other.

For people unfamiliar with IoT, a relatable example is a household connected by a string of electronic devices that are controlled by a central device like a smartphone. In the consumer market, products like Alexa are transforming households around the world by allowing users to control their homes using their voice. Umesh invited listeners to imagine a professional laboratory being controlled in the same way.

“Some of the key challenges the pharmaceutical and the bio pharmaceutical industries are facing today is the rising cost of medicines. This one challenge speaks to many underlying root causes. Rising costs of medicine include the cost of bringing medicines to market which on average totals upwards of two billion dollars.”

Key problems in pharma/biotech:
- Rising cost of medicine
- Data reproducibility
- Slow cycle time
- Compliance

Umesh then went on to comment on the key challenges in research.

“Most of the time in a typical lab, your instruments don’t speak to each other. What ultimately happens is that scientists spend a lot of time on data management. On average, one hour of experimentation is almost equal to one hour of data management!”

Umesh then talked about a well-known problem in the lab – reproducibility of data. Surprisingly enough, 50% of results published are not reproducible. This costs the US about $28 billion a year. Umesh made an important reference to Artificial Intelligence (AI) and commented that having quality data can ensure AI and Machine learning can really thrive. Novartis is exploring a number of tools to better this process and prevent data silos. Umesh ended his presentation by talking through the business value IoT can provide companies and laboratories.


Speaking next was Alok Tayi, CEO and Co-founder of TetraScience. Alok started by stating the problems facing the bio-pharma industries. Large sums of money are being invested in Research & Development, yet ROI is poor. What’s more, R&D returns are consistently declining.

“Just last year about $80 billion was spent on global pharmaceutical R&D, yet only 22 new medicines came to market! This is the context in which we are looking to make an impact. From our experience as a team, what we’ve seen across the industry is that one of the fundamental underpinnings of the challenge is that there exists an ecosystem inside the laboratory.”

Unfortunately, a lab's component parts don't communicate well with each other.


Alok then made an important series of observations. Within a working laboratory, multiple scientists are required to manage data in a number of different formats; from paper notebooks to USB drives, data still needs manual management. This takes time and energy and leads to a number of data silos, creating a myriad of problems.

On a more practical level, individuals also face the daily problems of running lab equipment.


The Internet of Things helps users connect to devices and data entry points.

“The Internet of Things really creates value when one combines the connectivity with a workflow innovation and a business model that is relevant to the end user.”

TetraScience delivers value in three core areas: Project execution, enterprise data and the scientific method.


Alok then gave a summary of how TetraScience have tailored IoT for science by connecting instruments with a cloud based data collection control point – all controllable through real time dashboards on your computer.


Next, Yvonne Linney, CEO of Transcriptic, delved into the ways in which Transcriptic is utilizing the IoT to create state of the art robotic laboratories that can be controlled digitally through the cloud.

“IoT enables a closed loop experimentation system where parameters can be continuously optimized based on the analysis data from previous rounds of experimentation. It’s critical to really develop the advances in experimental design."
@transcriptic has built one of the most futuristic bioscreening labs using IoT technologies


Yvonne then explained how Transcriptic’s robotic work cells operate and walked through Transcriptic’s other functions, describing how the data produced by its labs is processed and how removing human ambiguity from the interpretation of common lab protocols is key to reproducibility and consistency in running experiments.
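The closed-loop experimentation Yvonne describes (design, run, analyze, redesign) can be sketched as a simple optimization loop. The assay below is a made-up stand-in, not anything from Transcriptic's platform; it only illustrates the idea of refining a parameter from each previous round's data:

```python
import random

random.seed(0)

def run_experiment(conc):
    """Stand-in for a robotic run: a noisy assay signal peaking at conc = 5.0."""
    return -(conc - 5.0) ** 2 + random.gauss(0, 0.1)

# Simple closed loop: each round proposes new candidates around the
# best parameter seen so far, mimicking design -> run -> analyze -> redesign.
best_conc, best_signal = 1.0, run_experiment(1.0)
step = 2.0
for round_num in range(10):
    for candidate in (best_conc - step, best_conc + step):
        signal = run_experiment(candidate)
        if signal > best_signal:
            best_conc, best_signal = candidate, signal
    step *= 0.7  # narrow the search as rounds accumulate

print(round(best_conc, 2))  # converges near the (hidden) optimum of 5.0
```

Each round's candidates depend on the analysis of previous rounds, which is exactly what continuous remote access to instruments and data makes practical.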

How Transcriptic uses IoT to solve key issues:
- Remote access
- Transparency
- Precise instructions
- Data sharing

After listing the key issues that Transcriptic solves, Yvonne commented on Transcriptic’s vision for IoT, which is turning biology into an information technology. This will advance scientific research by driving down costs and facilitating collaboration and data sharing.

The webinar ended with a lively Q&A debate spearheaded by Laura Wheeler where insightful questions invoked great responses! If you would like to share your opinions about topics mentioned in the webinar, voice them using #DSwebinar. Follow @digitalsci for future webinars, podcasts and much more.

Transcriptic Webinar: "How the Internet of Things is Disrupting Science"

Tune into our upcoming thought leadership webinar, “How the Internet of Things is Disrupting Science” on Thursday 24th August at 4pm UK / 11am EST / 8am PST. Even if you can't attend live, a recording of the webinar will be sent to all registrants afterward.

In this webinar, we will look at how the Internet of Things (IoT) is changing and driving digital transformation in science, what tools are available, and what the future landscape will look like.

You’ll learn about:

  • What the Internet of Things (IoT) is and why it’s useful to know!
  • Why the IoT is foundational for science’s full potential
  • What the key challenges are facing the industry
  • Predecessors to IoT and current technology approaches
  • IoT case studies from Transcriptic and TetraScience – lab tools leading the way


Top thought leaders speaking on the webinar:

Yvonne Linney, CEO, Transcriptic

Prior to joining Transcriptic two years ago, Yvonne was an executive at industry-leading companies in the life sciences and diagnostics industries. She was a key member of the senior leadership team at Agilent Technologies where she was a VP General Management in the Life Science Group. She also collaborated with the founding members of the Human Genome Project during her time with Amersham International (now GE Healthcare). Yvonne holds a BS in Microbiology and Virology from Warwick University, UK, and a PhD in Genetics from Leicester University, UK.

Umesh Katpally, Novartis Institutes for Biomedical Research, an R&D arm of Novartis

Umesh is interested in enabling better and more efficient research with the use of new and evolving technologies such as IoT. He is currently working with a team to develop a long term strategy for Lab Informatics and automation, focused on making all research data machine learnable. The aim is to help with integrating and mining data sets from various stages of the drug development pipeline.

Alok Tayi, CEO and Co-founder, TetraScience

Prior to TetraScience, Alok was a post-doctoral fellow in George Whitesides Lab at Harvard University. He also co-founded the open innovation platform, PreScouter. Alok has 15 years of research experience and has published numerous high-impact papers in journals like Nature and Nature Chemistry. He completed his B.S. and PhD in Materials Science at Cornell University and Northwestern University, respectively.

Host: Laura Wheeler, Digital Science
Laura Wheeler is the Head of Digital Communications for Digital Science. Having studied Biochemistry, Laura left the lab for communication roles at the BBC and at Nature Publishing Group and now heads up digital comms at Digital Science where she is always busy helping to build and deliver content to communities in the science space.

Biomarker Quantitation In The Cloud

The Transcriptic robotic cloud lab provides a great advantage for large-scale biomarker assays, delivering consistent sample-to-sample processing with robotic reliability, all on demand when you need it. We’ve enabled both large pharma and small biotech to quantify DNA, RNA, and proteins from hundreds or thousands of samples in an automated and programmatically driven way.

In this post I want to focus on our commitment to providing access to modern tools in biomarker quantitation, and how integrating these tools with the Transcriptic robotic cloud lab not only increases their accessibility but also brings repeatability and reliability to the entire experimental workflow, from treatment to analysis.

Behind the scenes, the robotics, software, and science teams at Transcriptic have been collaborating to bring the Mesoscale Discovery Sector S 600 online for all Transcriptic users. Today I’m happy to announce that we’ve made huge strides in enabling high-throughput protein quantitation for our users, a big step in expanding that capability.

Mesoscale Discovery (MSD) Sector S 600 integrated with the Transcriptic robotic cloud lab


The Sector S 600 performs a function similar to a traditional ELISA but leverages electrochemiluminescence (ECL) as its detection technique. The Sector S 600 offers a number of benefits, and one of the most compelling is its data-to-sample efficiency. This makes it invaluable for liquid or tissue biopsies that are low in quantity, such as plasma.

To achieve high levels of analyte quantitation per unit sample, the Sector S 600 uses a proprietary plate technology that consists of microplate wells modified with 10 capture antibody spots in each well. This spot array of capture antibodies enables up to 10-plex capture of target analytes all from the same sample volume in a single well. Once the target analytes have been captured on the spots the plate is washed and detection antibodies are added similar to an ELISA workflow. At this point, the device will sequentially read each spot and measure the resulting electrochemiluminescence (ECL).
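Converting the raw ECL readings from each spot into concentrations is typically done against a standard curve; the four-parameter logistic (4PL) model is a common choice for immunoassays. As a minimal sketch, with hypothetical curve parameters purely to illustrate the back-calculation:

```python
def four_pl(conc, a, b, c, d):
    """4PL standard curve: signal as a function of concentration.
    a = response at zero conc, d = response at infinite conc,
    c = inflection point (EC50), b = slope factor."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

def inverse_four_pl(signal, a, b, c, d):
    """Back-calculate concentration from a measured ECL signal."""
    return c * ((a - d) / (signal - d) - 1.0) ** (1.0 / b)

# Hypothetical parameters for one analyte spot (not real instrument values)
params = dict(a=120.0, b=1.2, c=800.0, d=2.5e6)

measured = four_pl(50.0, **params)            # simulate a read at 50 pg/mL
estimate = inverse_four_pl(measured, **params)
print(round(estimate, 3))  # → 50.0, recovering the input concentration
```

In practice the curve parameters are fitted per plate from calibrator wells before sample signals are interpolated.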

In addition to its high data-to-sample efficiency, the MSD Sector S 600 also touts a 6-log dynamic range enabled by the ECL labelling technology, rendering even extremely low-concentration analytes in your samples detectable. Pre-validated panels in MSD’s V-plex line provide 10 or more targets in human, mouse, or rat samples in the following areas:

  • Proinflammatory response
  • Neuroinflammation and Alzheimer’s
  • Cytokines
  • Chemokines

MSD also provides customizable plates in their U-plex line, where we can onboard your custom panel of targets for your biomarker quantitation workflow. Launching runs that use the Sector S 600 from the Transcriptic web application is just like launching any other run, and the team has produced a great interactive UI for quickly assessing the multiplexed panel data.

Transcriptic is happy to make the Mesoscale Discovery Sector S 600 immunoassay device available in the cloud. It is fully integrated with our robotic cloud lab platform, delivering seamless sample-to-data workflows from tissue biopsies through to multiplexed protein quantitation outputs. The Sector S 600 sports a 6-log dynamic range, perfect for low-concentration analytes, and up to 10-plex analyte detection per sample for efficient data-to-sample yields.

Protein quantitation with Mesoscale Discovery Sector S 600

  • Low sample quantity demands with up to 10-plex detection of targets
  • High-throughput in either 96 or 384 well plates
  • High sensitivity with a 6-log dynamic range
  • Multiple panels available including inflammation and Alzheimer’s relevant targets across human, mouse, and rat

Get started measuring multiplexed protein biomarkers today by getting in touch with us, or learn more in the technical notes.

New Subscription Tiers


Transcriptic’s cloud lab model ensures that lab automation is used at its highest efficiency, which is made possible by treating the robotic infrastructure as a shared resource consumed by all users. As a result, once runs are submitted they enter a queue until enough robotic capacity is available for execution.


With the introduction of tiers, organizations can now set the priority they wish their runs to have on the Transcriptic platform. Each priority broadly correlates with an average queue time, so if you need your results extremely quickly you should opt for a higher-priority tier; if your discovery pipeline can accommodate some delay, you can opt for a lower-priority one. When you have runs that need a faster turnaround than your standard runs, you can jump up a tier for that month and return to your standard tier the next month.


| Tier | Estimated Queue Time | Monthly | Annual plan, prepaid |
| --- | --- | --- | --- |
| 1 | 1 week+ | $600.00 | $600.00/month |
| 2 | 3-5 days | $1,125.00 | $1,068.75/month |
| 3 | 1-2 days | $2,250.00 | $2,025.00/month |
| 4 | 24 hours | $4,500.00 | $4,050.00/month |
| 5† | 8 hours | $8,500.00 | Get in touch |
| Private Workcell‡ | <8 hours | $25,000.00 | Get in touch |

† Yearly expected volume >$1M. ‡ Yearly expected volume >$3M. Initial pricing subject to change.
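As a quick sanity check, the prepaid-annual column in the table corresponds to a simple discount off the monthly rate: none for tier 1, 5% for tier 2, and 10% for tiers 3 and 4:

```python
# Monthly list price and annual-prepaid effective monthly price,
# taken from the tier table above.
tiers = {
    2: (1125.00, 1068.75),
    3: (2250.00, 2025.00),
    4: (4500.00, 4050.00),
}

for tier, (monthly, prepaid) in tiers.items():
    discount = 1 - prepaid / monthly
    print(f"Tier {tier}: {discount:.0%} off when prepaid annually")
```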


Get in touch with our sales team at [email protected] to get started with the tier right for your organization.


Q: How do I upgrade my tier?
A: Get in touch with your organization's admin who can upgrade your tier straight from the web application.

Q: What priority does my run have if I submit it then change my tier?
A: Runs receive the priority that the organization had during run submission.

Q: Is there a free tier?
A: Yes, there is a free tier; however, it does not come with the ability to submit runs.

Q: What is the benefit to paying annually?
A: Paying annually secures a discount on the tier fee along with simplified billing.

Q: Why do the private workcell and tier 5 have minimum spend requirements?
A: These two options are for users who anticipate intense usage of the robotic hardware. To ensure the platform has the capacity to meet your requirements, Transcriptic also requires a commitment from the user.

Q: What happens if my run waits longer than the expected queue time?
A: Due to the dynamic nature of the queue, Transcriptic cannot guarantee queue times and can only quote the historical performance of each tier to provide estimates.

Q: Can I only pay for the months I am running experiments?
A: If you are on a month-to-month subscription, please contact [email protected] to downgrade to an inactive tier. For monthly subscription billing with an annual commitment, it is possible to downgrade before the next year of your commitment cycle starts; however, monthly billing will continue for the initial year committed to.

QuikChange Lightning on Transcriptic

Part of growing Transcriptic means making industry-leading protocols and reagents accessible to our users. For this reason I’m happy to announce that on November 4, 2016, Agilent Technologies’ QuikChange Lightning site-directed mutagenesis kit will be available in the Transcriptic protocol browser.

Earlier in the year, Transcriptic was used for the first time in a peer-reviewed study to generate a large number of mutants, enabling the team from UC Davis to explore a parametric space they had not previously had access to. With this in mind, we thought that enabling the exploration of protein sequence space was a great application of a programmable lab, as it marries computational protein engineering techniques with wet-lab experimentation.

We approached Agilent, who were highly responsive in pursuing the implementation of their QuikChange products on Transcriptic. The first we decided to tackle was QuikChange Lightning for single site-directed mutagenesis.

QuikChange is a very popular suite of kits that provide a highly efficient, non-PCR method for reliable site-directed mutagenesis. The QuikChange kits make use of a linear amplification strategy, with only the parental strand serving as the DNA template. The kits also feature highly efficient enzymes for mutant generation and reaction clean-up, all of which adds up to a robust and simple user experience.

Schematic cartoon of the QuikChange mutagenesis process.


The implementation on Transcriptic is designed to make it exceptionally easy to generate anywhere from a single mutant up to large numbers, with minimal hands-on time from the user. A user simply launches the protocol and supplies a .csv list of mutagenic oligonucleotides along with the source DNA (typically a plasmid) to mutate.

Once the run has been submitted, the Transcriptic platform takes care of ordering the oligonucleotides, performing the QuikChange reaction, transforming competent bacteria, picking colonies and finally growing the picked colonies in deep well liquid cultures.
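As a rough sketch of the input, the oligonucleotide list is just a small .csv file. The column names and sequences below are purely illustrative assumptions, not the exact headers the protocol expects:

```python
import csv
import io

# Hypothetical mutagenic oligos: one row per desired mutant.
oligos = [
    ("K52R_fwd", "GATTACAGGTCGTGCTAAAGGTCC"),
    ("E110A_fwd", "CCTGGAAGCAGCTATTGACGGTAA"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["oligo_name", "sequence"])  # illustrative header names
writer.writerows(oligos)
print(buf.getvalue())
```

Uploading one file like this is all the hands-on design work the user does; the platform handles everything downstream.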

A section of the resulting plate from transforming competent bacteria with the QuikChange reaction products. Colonies circled in red were picked by the platform to be grown in liquid culture.


When implementing partner reagents, we have to ensure we can achieve the same data quality our users are typically used to back at the bench. For QuikChange we replicated the performance of the kit by attempting a mutation of two adjacent bases, from CA to GG. The protocol produced a large number of colonies, and upon screening by Sanger sequencing, 75% of the screened colonies were positive for the mutation.

a) Top: Sequence from the source DNA. Below: Sequences from 4 picked colonies. 3 out of the 4 screened colonies were found to be positive for the target mutation. b) Clean sequences traces shown for 3 colonies.


Over the past four weeks, we’ve had the pleasure of talking about QuikChange Lightning on Transcriptic at SynBioBeta San Francisco 2016, LRIG Boston 2016, and iGEM 2016, all fantastic venues for sharing our work, with some very excited people in attendance.

We hope you’ll see how easy it is to start exploring a vast protein sequence space with Transcriptic and QuikChange Lightning. You can learn more here.




Introducing BSL-2 Workcell Instances

Since the start of Transcriptic, users have only been able to conduct Biosafety Level 1 (BSL-1) experiments on our automated cloud infrastructure. We believed that executing well at this lower-stringency threshold meant we could deliver a great service for our users whilst still rapidly developing our capabilities.

A few months ago we completed the construction of two additional workcells, and, working closely with a select group of users, we have been running automated BSL-2 experiments on our cloud infrastructure to facilitate some amazing science. With the new workcells complete and some great users on board, we are really excited to offer BSL-2 workcell instances to all Transcriptic users from today.

With access to automated cloud BSL-2 environments, companies using Transcriptic are tackling some of the hardest problems in discovery biology in completely new ways. For example, biological tissue can now be processed and analyzed at Transcriptic via techniques such as ELISA, qPCR, and RNA-seq in a completely programmatic and automated way, with data made available to our users through the API. BSL-2 instances have also made new applications possible entirely from a command line interface, including viral engineering, mammalian cell-based assays, and BSL-2 bacterial engineering.

Get started with BSL-2 instances today

To start using BSL-2 environments for your work, simply create a new project and upgrade it to BSL-2 status. Any run submitted to this project will then be executed on a Transcriptic BSL-2 instance.

For more information on BSL-X environments at Transcriptic, check out the documentation.

Autoprotocol Summit 2016 - Pushing reproducibility and usability

We recently hosted the second annual Autoprotocol Summit at our new San Francisco office. With the gorgeous city of San Francisco as a backdrop, we took the day to think about how Autoprotocol has changed since the first Autoprotocol Summit, and how we can make it even better.

Joined by developers and vendors from a number of companies and organizations, we met to celebrate the achievements of the last six months and focus on making Autoprotocol more usable and reproducible. Here's a report from the trenches!

We began with a recap of the progress made since the last summit.

Highlights included:

  1. An increase in Autoprotocol's scientific coverage, with 15 new Autoprotocol Standard Changes (ASCs) allowing for new instructions ranging from magnetic transfer for bead-based purification of DNA, RNA, protein, and cells to purification by gel electrophoresis.

  2. Improvements in developer tools for writing and analyzing Autoprotocol with the release of Autoprotocol-Utilities, updates to Autoprotocol-Python, and to the data analysis components of the Transcriptic Python Library (TxPy).

  3. The ability to increase reproducibility with the addition of time constraints as a top level feature in Autoprotocol.

Overall, it's never been easier to write and use Autoprotocol.
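Since Autoprotocol is plain JSON, a minimal protocol can be sketched with ordinary dicts. The `refs`/`instructions` layout below follows the Autoprotocol spec; the specific container type and storage values are common examples, and real protocols are usually generated with autoprotocol-python rather than written by hand:

```python
import json

# A minimal Autoprotocol document: one 96-well PCR plate and a single
# incubate instruction, expressed as the JSON the spec defines.
protocol = {
    "refs": {
        "sample_plate": {
            "new": "96-pcr",               # container type to provision
            "store": {"where": "cold_4"},  # storage condition after the run
        }
    },
    "instructions": [
        {
            "op": "incubate",
            "object": "sample_plate",
            "where": "warm_37",
            "duration": "1:hour",
            "shaking": False,
        }
    ],
}

print(json.dumps(protocol, indent=2))
```

Because the whole experiment is serialized this way, the same document can be re-executed verbatim, which is what makes the reproducibility and time-constraint work above possible.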

Next we took a look at how Autoprotocol is being used in the wild. We heard about all of the projects using Autoprotocol inside and outside of Transcriptic including autoprotocol-ruby, assaytools, and “How to make any protein you want for $360”.

Developer's perspective from Brian Naughton


Autoprotocol developer Brian Naughton presented his experiences using and contributing to Autoprotocol. Brian felt that Autoprotocol's underlying JSON and Python infrastructure was a strong choice for standard adoption because it was very accessible to scientists. Brian, who uses Autoprotocol mainly in conjunction with Transcriptic, also described how the development of TxPy has made it much easier to launch his experiments. Finally Brian concluded with a brief mention of his plans to look at generating sequences of experiments with workflow tools to make chained experiments a reality.

In the Q&A period, Brian touched on the need for greater transparency in how Autoprotocol is translated to a physical experiment (especially with regards to inventory) as well as the need for more tools to help abstract away some lower-level decisions which scientists may be less interested in (like liquid-handling parameters).

The Need for Platform Independence by Connor Warnock


Next, Connor Warnock from Notable Labs brought a vendor's perspective to the day. Connor shared a common pain point faced by many automation startups: the lack of standardization around devices and their protocols. His presentation focused on the possibility for Autoprotocol to become a universal common interface and compared the current stage of Autoprotocol to the early days of HTTP, where the long-term payoff is clear, but more immediate payoff is required for driving adoption. As a part of becoming a better layer for lab automation, there was substantial discussion around the possibility of broadening the scope of Autoprotocol to cover open-sourcing Lab Inventory Management Systems (LIMS) as well as open-sourcing device-driver wrappers.

After the two presentations, everyone headed to the Goals session excited to move Autoprotocol forward. After a short icebreaker, we went into brainstorming mode to discuss the problems confronting Autoprotocol's users and potential users. There was a lively discussion, with potential users ranging widely from pharmaceuticals to developing countries to even the FBI. The brainstorming led to three key areas for future Autoprotocol growth:

  1. Visualization and Intent

  2. Reproducibility

  3. Community Adoption

With the topics to be tackled agreed upon, we broke for lunch and started informal discussions.

Following lunch, we looked into the possibility of improving the ASC contribution process. There was general consensus that the current process should be made more transparent and explicit. As one action item moving forward, the public discussion around submitting new ASCs will begin much earlier in the developer forums, and interested parties should join in on the contribution process. There was also substantial discussion on changing the underlying format of Autoprotocol to more naturally support communicating experimental intent, which would make it easier for scientists to use Autoprotocol to share scientific protocols.

Finally, we broke into small groups to tackle the three key areas of Visualization, Reproducibility, and Community Adoption brought up earlier. The room came up with a lot of great ideas and rapid prototypes. To highlight a few, there were mocks of better protocol visualization tools, a drag-and-drop system for creating Autoprotocol, and the groundwork for more comprehensive standards for reagents and data. Be on the lookout for more specific implementations to come!

Thank you to all who attended Autoprotocol Summit 2016. None of the growth in Autoprotocol would be possible without the enthusiasm and care of the developer community. Shoutouts go to external developers Brian Naughton and Transon Nguyen & Connor Warnock from Notable Labs. Thanks as well to Ben Miles and Yang Choo for leading and organizing the sessions and to Taylor Murphy for handling the logistics. Thanks also to the Autoprotocol Curators, including Tali Herzka from Verily and Vanessa Biggers, Jeremy Apthorp & Peter Lee from Transcriptic, for their support. And of course, a big thank you to the Autoprotocol community. We are looking forward to another great year of Autoprotocol ahead.


Transcriptic Launch

Easier development of packages

The Transcriptic CLI tool was recently updated to make protocol package development faster. The new launch command allows you to quickly preview the UI generated by the package as well as the Autoprotocol JSON.


Transcriptic packages enable the launching of protocols via the UI. They are really useful for sharing protocols with team members who don't program their own protocols: anyone can easily launch runs with custom parameters, all from a user interface.

The process for getting packages on to Transcriptic

Currently, packages are uploaded to Transcriptic via a release process to enable version control; however, this process can be somewhat slow when developing a package. Often you want to quickly make changes and see how they affect the generated user interface, or have GUI access to your inventory when launching a protocol, without going through the release process. The new launch command makes this easier.

transcriptic launch

Executing the new launch command will open up a web browser and show the UI generated from the manifest.json file in the package. Here you can interact with the user interface like you would for any other package, by filling in parameters and accessing containers in your inventory via the inventory browser.

During package development you can launch your protocol as follows:

transcriptic launch protocolName --project "Project Name"

This opens your default web browser and shows the run configuration page constructed from the manifest.json.

Waiting for inputs to be configured.......
Generating Autoprotocol....

Interact with the generated UI as you would with any other package.

Now head back to the command line to interact with the Autoprotocol JSON

From here you can fill out the parameters for the experiment as well as populate fields with samples from your inventory. When you click 'Generate protocol input', the run is not submitted to Transcriptic. Instead, the JSON generated by the protocol is written to stdout back in the command line, where it can be piped to transcriptic submit or to a local log file.

{
  "refs": {
    "tube": {
      "new": "micro-1.5",
      "store": {
        "where": "cold_4"
      }
    }
  },
  "instructions": [
    {
      "to": [
        {
          "volume": "2.0:microliter",
          "well": "tube/0"
        }
      ],
      "op": "provision",
      "resource_id": "rs18nw6ta6d5bn"
    }
  ]
}

This makes it very easy to inspect the Autoprotocol generated by a package, or to quickly iterate on the fields available in the user interface, speeding up development time.

Programming Transcriptic

So you want to program a biology lab? You're in the right place.

Today we are going to instruct a completely automated robotic cloud lab to grab a genetically modified strain of bacteria from a library of common reagents, inoculate some bacterial growth media, and finally watch how that culture grows over 8 hours by measuring how the bacteria scatter 600 nm light.

Let's get started.

After you sign up for a Transcriptic account you need to install some dependencies. We'll be working with Python today, so pip is your friend.

First let's install the Transcriptic CLI tool.

pip install transcriptic

Next we'll be writing Autoprotocol, the open standard for experimental specification, so we need a tool to help us do that.

pip install autoprotocol

OK we're all set.

Let's run the Transcriptic CLI.

> transcriptic

Usage: transcriptic [OPTIONS] COMMAND [ARGS]...

  A command line tool for working with Transcriptic.

Options:
  --apiroot TEXT
  --config TEXT            Specify a configuration file.
  -o, --organization TEXT
  --help                   Show this message and exit.

Commands:
  analyze         Analyze a block of Autoprotocol JSON.
  build-release   Compress the contents of the current...
  compile         Compile a protocol by passing it a config...
  create-package  Create a new empty protocol package
  create-project  Create a new empty project.
  delete-package  Delete an existing protocol package
  delete-project  Delete an existing project.
  format          Check Autoprotocol format of manifest.json.
  init            Initialize a directory with a manifest.json...
  login           Authenticate to your Transcriptic account.
  packages        List packages in your organization.
  preview         Preview the Autoprotocol output of protocol...
  projects        List the projects in your organization
  protocols       List protocols within your manifest.
  resources       Search catalog of provisionable resources
  submit          Submit your run to the project specified.
  summarize       Summarize Autoprotocol as a list of plain...
  upload-release  Upload a release archive to a package.

First we need to log in to our Transcriptic account and specify our organization.

> transcriptic login

Email: [email protected]
You belong to 3 organizations:
  Sanger Lab (sanger-lab)
  Franklin Lab (franklin_lab)
  Swift on Pharma (swiftpharma)
Which would you like to login as [sanger-lab]? swiftpharma
Logged in as [email protected] (swiftpharma)

Great, we're logged in; now we can start writing our protocols. Let's create a file to contain our commands to produce the Autoprotocol description of a growth curve. You could also do this interactively in a Python REPL.

> touch

First we'll add the import statements: autoprotocol provides the functions to generate Autoprotocol JSON, and the json package provides utility methods for serializing JSON.

"""This script produces Autoprotocol to execute a growth curve on Transcriptic"""
from autoprotocol import *
import json

Next let's instantiate a protocol object that all of our instructions are attached to.

p = Protocol()

Now we are going to begin defining the references. References describe containers used in the protocol, such as plates and tubes. We are going to describe just two containers: one for the bacteria and one for the plate that will be used in the plate reader to follow the growth.

A reference takes 5 arguments: name, id, cont_type, storage, and discard. id is required if referencing a container that already exists; if instantiating a new container, an id will automatically be assigned by Transcriptic upon run submission. cont_type is the type of container; below we are specifying a flat-bottomed 96-well plate and a 1.5 mL microcentrifuge tube. storage is the temperature at which you require the sample to be stored when not directly in use. Below, the plate will be stored at 4°C whenever it is not being used; the tube, however, will be discarded at the end of the run because discard is set to True.

growth_plate = p.ref("growth_plate", id=None, cont_type="96-flat", storage="cold_4", discard=None)

bacteria = p.ref("bacteria_tube", id=None, cont_type="micro-1.5", storage=None, discard=True)

Now that we have defined our containers, we want to fill them up. First of all we want to get some E. coli from the Transcriptic common reagent library. This can be done with the provision instruction and the resource_id for the material we need. Resource IDs can be found in the catalogue.

dh5a = "rs16pbj944fnny"
p.provision(dh5a, bacteria.well(0), "15:microliter")

Now let's fill the first column of that empty 96 well plate with some growth media. LB-broth should do the job nicely. The code below will dispense 175µL of LB-broth into each well in the first column.

p.dispense(growth_plate, "lb-broth-noAB", [{"column": 0, "volume": "175:microliter"}])

Now let's inoculate 4 of the 8 wells with E. coli using transfer.

test_wells = growth_plate.wells_from(0, 4, columnwise=True)
for dest in test_wells:
  p.transfer(bacteria.well(0), dest, "2:microliter")

In an interactive Python session you can see what the test_wells WellGroup looks like. Note that wells are 0-indexed and increment row-wise.

>>> test_wells
  Well(Container(growth_plate), 0, None),
  Well(Container(growth_plate), 12, None),
  Well(Container(growth_plate), 24, None),
  Well(Container(growth_plate), 36, None)
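The mapping behind those indices can be sketched in plain Python (a standalone illustration, not part of autoprotocol): since well numbering increments row-wise on a 96-well plate, walking down a column steps by the number of columns.

```python
# Standalone sketch (not part of autoprotocol): on a 96-well plate, well
# indices increment row-wise, so column 0 contains wells 0, 12, 24, 36, ...
COLS = 12  # a 96-well plate is 8 rows x 12 columns

def column_wells(column, count):
    """Return the first `count` well indices down the given column."""
    return [row * COLS + column for row in range(count)]

print(column_wells(0, 4))  # [0, 12, 24, 36], the four inoculated wells
```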

That's the inoculation taken care of. Now let's create a loop that will incubate the culture for 30 minutes and then take an absorbance measurement at 600 nm.

# Set total growth time of growth curve
total_growth_time = Unit(8, "hour")
# Set the number of OD600 measurements taken over the time course.
number_of_measurements = 16

for i in xrange(0, number_of_measurements):
  p.cover(growth_plate) # put a lid on the plate
  p.incubate(growth_plate, "warm_37", duration=total_growth_time/number_of_measurements, shaking=True, co2=0)
  p.uncover(growth_plate) # take lid off of plate
  p.absorbance(growth_plate, test_wells, wavelength="600:nanometer", dataref="od600_%s" % i)
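As a quick sanity check on the schedule (plain arithmetic, no autoprotocol required), 16 measurements over 8 hours works out to one reading every 30 minutes:

```python
# Each loop iteration incubates for total_growth_time / number_of_measurements.
total_growth_time_minutes = 8 * 60
number_of_measurements = 16
print(total_growth_time_minutes // number_of_measurements)  # 30 minutes per cycle
```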

Now, with all of this in a single Python file, we need to get some JSON that can be sent to the Transcriptic API.

For this we can use:

# Dump the Autoprotocol JSON.
my_experiment = json.dumps(p.as_dict(), indent=2)

Now from the command line we can run the Python file, which will print the JSON object to stdout, where it can be piped to the Transcriptic CLI.

Let's see how much this protocol run will cost with transcriptic analyze:

python | transcriptic analyze
✓ Protocol analyzed
  67 instructions
  2 containers
  Total Cost: $25.59
  Workcell Time: $18.25
  Reagents & Consumables: $7.34

Let's find the project we want to submit to:

transcriptic projects


              PROJECT NAME              |               PROJECT ID
PCR                                     |             p18qua34567db
Directed Evolution                      |             p18qrn9745vfz
I'll come up with a name later          |             p18s63543jm9t3
Bad Blood... work up                    |             p18qupn345v99
Red... Fluorescent protein cloning      |             p18qrjd345u89

python | transcriptic submit -p p18qrjd345u89

Run created:

And that's it: the run is submitted and the robots will execute it.

After the run completes the data can be downloaded from the web app as a CSV or via the API. I will cover data analysis in another post.

If you have any questions, head to the forum; for further reading, check out the Transcriptic support site.

Transcriptic will be at SLAS2016 on booth #1423

So I’m going to let you in on a little secret... Transcriptic is going to be at SLAS 2016 and we’ve got some amazing things to share with you. SLAS is the Society for Laboratory Automation and Screening, and SLAS2016 is the annual conference for those seeking automated tools to conduct their research. I think we’ll fit in pretty well.

We’re going to be there with a workshop, 4 posters, and an amazing booth (#1423) that we would love for you to visit. Let’s kick off with a fan favorite.

We’re bringing CRISPR tools with us!

That’s right: Dr. Jim Culver, one of the wonderful team at Transcriptic, will be presenting the new automated workflow for CRISPR constructs built on top of the Transcriptic robotic cloud lab. Jim’s poster, entitled Assembling CRISPR gRNA Constructs Using the Transcriptic Robotic Cloud Laboratory, describes the assembly of gene-editing CRISPR constructs produced remotely in Transcriptic’s cloud laboratory. Transcriptic was used to achieve a 100% success rate for the assembly of each CRISPR construct design and its transformation into bacteria.


So if you’re using, or thinking about using, CRISPR, it’s a must-see presentation. You can find Jim’s poster in the Automation & High Throughput Technologies category on Monday Jan 25, 1:00 PM - 3:00 PM, poster number #3037.

Repeatable site directed mutagenesis all from your laptop, without touching a single pipette.

Dr. Yin He from Transcriptic, equally as wonderful as Jim, is presenting her poster on Kunkel mutagenesis. The poster, entitled Use of the Transcriptic Robotic Cloud Lab for High-Throughput Site-Directed Mutagenesis, describes the ease of performing automated site-directed mutagenesis at scale with an internet connection and the Transcriptic robotic cloud lab. 32 mutants were designed and successfully transformed into bacterial hosts.

Dr He’s poster will be in the Automation & High Throughput Technologies category on Monday Jan 25, 1:00 PM - 3:00 PM, poster number #3028. Find out how you can exploit the power of Kunkel mutagenesis at scale.

Application of Autoprotocol for the critical assessment of liquid handler reliability

Autoprotocol, the popular open standard for documenting protocols, was used by Dr. Peter Lee in a study of a variety of liquid handlers. Peter’s poster, The Transcriptic On-Demand Robotic Cloud Lab Reliably Performs High Throughput qPCR, demonstrates the flexibility and, critically, the benefits of designing experimental protocols adherent to the Autoprotocol data standard. An Autoprotocol-adherent qPCR protocol was used to test the reliability of three commercially available liquid handlers in a highly repeatable way.

Dr Lee’s poster can be found in the Automation & High Throughput Technologies category on Monday Jan 25, 1:00 PM - 3:00 PM, poster number #3039.

High throughput drug screening on PDCs via the internet with Transcriptic.

You weren’t expecting that heading, were you?! Alyssia Oh et al. from CPMC are presenting High Throughput Precision Drug Screening of Patient Derived Tumor Cells on Transcriptic's Cloud Based Laboratory. In this poster the authors describe their method for high-throughput screening of anticancer drugs on patient-derived cells (PDCs) from patient-derived xenografts (PDX). This amazing high-throughput screening was conducted on Transcriptic’s robotic cloud lab.

You can find the poster in the Screening and Assay Development category on Monday Jan 25, 1:00 PM - 3:00 PM, poster number #2169.

Last but not least, our Transcriptic workshop

Dr Conny Scheitz will be running a workshop for attendees of SLAS2016 on how to integrate Transcriptic into their workflow to reap the benefits of our robotic cloud lab. The workshop will cover how to launch protocols from our standard library of protocols, straight from your laptop. Be guided through Transcriptic at our workshop on January 26 at 9:30am in room 11B.

We really do have a fantastic selection of presentations on the future of biology. I do hope you will come and say hello.

Welcoming the HiG to our robotic cloud lab

I feel the need… the need for speed.
— Maverick & Goose, Top Gun, 1986 (five stars)

A quick update for all of our wonderful Transcriptic users. We recently added the BioNex HiG microplate centrifuge to our robotic cloud lab, expanding our centrifuge capabilities. The HiG is a great piece of kit and we’re very happy to have it.

So what does this mean? 

OK, well before this you could only spin at up to 1000 g, but now the limit has been increased to 4000 g! This is awesome: it means more efficient centrifugation for pelleting cells, minipreps, performing separations, and anything else our clever users come up with.

For our users who launch protocols from the protocol browser, this means better performance of your protocols. For our developers creating your own protocols and experiments, this means an increased parameter space to play with, and no changes to the data structure you are used to.

A quick Autoprotocol refresher for the `spin` operation.

I’m sure you don’t need it, but here it is just in case. Below is a small JSON snippet of Autoprotocol showing some of the new parameter values. Acceleration can be specified in terms of “g” or “meter/second^2” if you’re not a fan of gravitational units...

{
  "op": "spin",
  "object": "plate",
  "acceleration": "4000:g",
  "duration": "120:second"
}
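For context, a spin instruction is just a small dictionary. The sketch below builds one with plain Python dicts (no autoprotocol dependency); the container name "plate" is an illustrative ref name, not a fixed value.

```python
# Minimal sketch: build a spin instruction as a plain dict. The "object"
# value is a ref name defined elsewhere in the protocol; the acceleration
# unit "g" could be swapped for "meter/second^2" as described above.
def spin_instruction(ref_name, acceleration_g, duration_s):
    return {
        "op": "spin",
        "object": ref_name,
        "acceleration": "%d:g" % acceleration_g,
        "duration": "%d:second" % duration_s,
    }

print(spin_instruction("plate", 4000, 120)["acceleration"])  # 4000:g
```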

As always, for the full description of capabilities see the documentation, and for questions jump on the community forum.

Till next time, happy experimenting.

Ben and the team at Transcriptic

Welcome Marie, Michael and Tom!

Transcriptic has grown fast over the last 8 months. In fact, we've doubled in size since we closed our $9M Series A back in January.

It's amazing how things that worked well back when we were eight people don't work at all now, and how much the feeling of the company has changed over time.  This growth meant that we had to start thinking about how to go from engineering a technology to building a company.

The key to this for us was having the right leadership in place. Over the last three months we've welcomed three important new leaders to the company, who have made a night-and-day difference for us.

Tom Driscoll is our new VP of Business Development. Tom’s been around the block and brings deep commercial experience as principal of his own consulting business, VP of Marketing and Business Development at Fluxion Biosciences, VP of Global Marketing at Molecular Devices, VP of the Bioimaging business at Becton Dickinson, and Director of Marketing at Clontech.

That's a pretty serious resume, and we're very happy to have him on board.

Michael Lin is our new VP of Operations. Before joining us, he was responsible for business continuity as part of the senior operations team at Invitae. Before Invitae, he ran the assay product group at Fluidigm and built it from negligible revenue to millions per year.  At Transcriptic, he has overall responsibility for lab operations. Michael is the principal guardian of our efficiency and quality metrics.

Marie Lee is our new VP of Applications. Her job is to make sure that customers are successful and that we're able to quickly and efficiently onboard new assays and methods.  She's the interface between operations, engineering and business development that ties everything together. Before Transcriptic, she was a Senior Applications Scientist at Fluidigm, where she played a very cross-functional role between sales, marketing and R&D. Before Fluidigm she was a Field Application Scientist at Thermo-Fisher. She's taught at USF as an Adjunct Professor, and did a postdoc at UCSF.

I'm very lucky to get to work with such a world-class team, and we're still growing fast. If you're interested, check out our open positions and get in touch!  Thanks to IA Ventures, Data Collective, Google Ventures, and all of our other investors for having the confidence to back us before any of this was together, and of course our customers who have entrusted their science to a lab they can't see or touch.  We're excited to be doing what we do.

Provisioning Commercial Reagents

Over the last few weeks we’ve made some big changes to our inventory reservation system: most notably the "reserve" button next to each reagent that allowed you to reserve an aliquot and make it available in your inventory has disappeared. In the interest of allowing the reservation of arbitrary amounts of resources instead of pre-designated aliquot sizes, we’ve switched from a system of reserving resources to provisioning them. This way, you only pay for the reagents you use and Transcriptic takes care of making sure reagents are as fresh as possible so you don’t have to. For most users, this transition doesn’t mean much except less work. For protocols where you would have had to choose aliquots of reagents like ligase buffer or polymerase that you had previously reserved from your own inventory, appropriate volumes of those reagents are now automatically provisioned from within the protocol and pricing is rolled into the cost of the run accordingly. 

For developers submitting custom protocols, this change means switching over to using the provision instruction within scripts as you would a transfer (with some special considerations, read below). Resource IDs for use in the provision instruction can now be found by clicking on a given resource within the catalog.

The most appropriate way to use provision is to include as few provision instructions for a resource within a protocol as possible. Calculate the total volume of each reagent you’ll need for the protocol you’re writing and provision that amount into the appropriate container type(s). Be sure to keep our design considerations and container dead volumes in mind as you do this. Additionally, consider the storage condition of the container you’re provisioning a resource into if you don’t plan to discard it after a run. For example, if you would like to transfer 5 µL of water into every well of a 96-well plate, provision at least 495 µL of water into a new tube and distribute or transfer it from there. Do not provision 5 µL from the common stock into each of your 96 wells. Provisioning once will decrease freeze/thaw cycles and preserve Transcriptic's common stock.
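The arithmetic behind that example can be sketched as a standalone helper; note the 15 µL dead volume is illustrative only, so check the container specs for real values.

```python
# Hypothetical helper (not part of autoprotocol): total volume to provision
# up front, i.e. everything you will transfer plus the container dead volume.
def total_provision_volume(per_well_ul, n_wells, dead_volume_ul=15):
    return per_well_ul * n_wells + dead_volume_ul

print(total_provision_volume(5, 96))  # 495 uL for 5 uL into each of 96 wells
```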

There are also several limitations for developers using provision within their scripts:

  • the minimum volume you can provision of a given resource is 2 microliters (with the maximum being the maximum volume of the container you’re provisioning into)
  • you may only provision a resource into a maximum of 12 wells per provision instruction
  • a maximum of 3 provision instructions specifying the same resource should exist within one protocol
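These limits can be checked mechanically before submission. Here is a hypothetical lint pass (not part of autoprotocol or the Transcriptic CLI) over provision instructions represented as plain dicts:

```python
# Hypothetical checker for the three provision limits listed above.
def check_provisions(instructions):
    uses_per_resource = {}
    for instr in instructions:
        if instr["op"] != "provision":
            continue
        # at most 12 destination wells per provision instruction
        assert len(instr["to"]) <= 12, "max 12 wells per provision instruction"
        for dest in instr["to"]:
            # minimum provision volume is 2 microliters
            volume_ul = float(dest["volume"].split(":")[0])
            assert volume_ul >= 2, "minimum provision volume is 2:microliter"
        rid = instr["resource_id"]
        uses_per_resource[rid] = uses_per_resource.get(rid, 0) + 1
    # at most 3 provision instructions per resource per protocol
    for rid, count in uses_per_resource.items():
        assert count <= 3, "max 3 provisions of %s per protocol" % rid

check_provisions([{
    "op": "provision",
    "resource_id": "rs18nw6ta6d5bn",
    "to": [{"well": "tube/0", "volume": "2.0:microliter"}],
}])  # passes silently
```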

One place you can’t use the new provision instruction (for now) is for 6- and 1-well agar-filled plates. These are the only resources that still use the reservation system instead of provision until we extend this feature for use with solid resources.

In the meantime, the following code can be imported to your script to reserve plates with agar and the antibiotic of your choice:

from autoprotocol.container import Container
from autoprotocol.protocol import Ref

def ref_kit_container(protocol, name, container, kit_id, discard=True, store=None):
    kit_item = Container(None, protocol.container_type(container))
    if store:
        protocol.refs[name] = Ref(name, {"reserve": kit_id, "store": {"where": store}}, kit_item)
    else:
        protocol.refs[name] = Ref(name, {"reserve": kit_id, "discard": discard}, kit_item)
    return kit_item

def return_agar_plates(wells):
    """Dicts of all plates available that can be purchased."""
    if wells == 6:
        plates = {"lb-broth-50ug-ml-kan": "ki17rs7j799zc2",
                  "lb-broth-100ug-ml-amp": "ki17sbb845ssx9",
                  "lb-broth-100ug-ml-specto": "ki17sbb9r7jf98",
                  "lb-broth-100ug-ml-cm": "ki17urn3gg8tmj",
                  "noAB": "ki17reefwqq3sq"}
    elif wells == 1:
        plates = {"lb-broth-50ug-ml-kan": "ki17t8j7kkzc4g",
                  "lb-broth-100ug-ml-amp": "ki17t8jcebshtr",
                  "lb-broth-100ug-ml-specto": "ki17t8jaa96pw3",
                  "lb-broth-100ug-ml-cm": "ki17urn592xejq",
                  "noAB": "ki17t8jejbea4z"}
    else:
        raise ValueError("Wells has to be an integer, either 1 or 6")
    return plates

Example Usage:

import json
from autoprotocol.protocol import Protocol
# assuming you've pasted the above helper code into a file called reserve_plates.py
from reserve_plates import *

protocol = Protocol()

z10b = protocol.ref("Zymo10B", None, "micro-1.5", discard=True)
# provision bacteria
protocol.provision("rs16pbjc4r7vvz", z10b.well(0), "50:microliter")
# dilute with LB
protocol.provision("rs17bafcbmyrmh", z10b.well(0), "350:microliter")
protocol.mix(z10b.well(0), "150:microliter")
# illustrative arguments: a 6-well agar plate without antibiotic, kept at 4C
plates = return_agar_plates(6)
myplate = ref_kit_container(protocol, "my_agar_plate", "6-flat", plates["noAB"],
                            discard=False, store="cold_4")
for i in range(0,6):
    protocol.spread(z10b.well(0), myplate.well(i), "60:microliter")
protocol.incubate(myplate, "warm_37", "16:hour")

print json.dumps(protocol.as_dict(), indent=2)

We're always looking for feedback as we release new features, so feel free to reach out. You can contact us via email at [email protected] or join the new Transcriptic Community.

Adding potential energy: Transcriptic's Series A

Today I'm excited to announce that we've raised approximately $8.5 million in a Series A financing, bringing the total investment in the company to a little over $14 million. The round was led by Data Collective with participation from IA Ventures, AME Cloud Ventures, Silicon Valley Bank, 500 Startups, MITS Fund, Y Combinator, Paul Buchheit, and a bunch of other angels. The round officially closed at the very end of December, 11 months after we raised a $2.8M "Series Seed" led by IA Ventures.

As Dalton Caldwell of YC likes to say, raising money is like having gas in the tank of your car: it gives you potential, but you haven't actually gone anywhere yet. It's very important not to confuse fundraising with actual success.

In the last month alone we've released a new way to launch protocols via the web, an easier way to buy commercial kits directly through your Transcriptic account, split off our protocol language, Autoprotocol, into an open-source project, and completely overhauled our documentation in addition to adding hundreds of minor features and bugfixes throughout the Transcriptic experience. We're now almost 30 people and have lab ops running around the clock on most days.

When we started Transcriptic, we set out with the goal of giving the life sciences the same structural advantages that web has enjoyed, making it possible for two postdocs with a laptop in a coffee shop to run a drug company without the need for millions of dollars in capital equipment or lab space. To be clear, we are not there yet. However, with an incredible team and set of investors and partners, we are now in the rare and fortunate position of being able to take a real "shot on goal" on a truly large and interesting problem.

We're currently hiring for a bunch of positions. Specifically, if you're either a:

you should definitely get in touch.

Buying Reagents Through Transcriptic

Your own lab is stocked full of the commercially available reagents and kits you use every day. But when you want to use those same reagents on Transcriptic, until now you've had to aliquot them out by hand and ship them individually (if your organization even allows it!).

Today that process is becoming much, much easier. Just click on the 'Catalog' tab in your inventory, type in the name of the reagent or kit you're looking for, and reserve as much as you need; it will be instantly available in your inventory for use. In most cases, it's actually cheaper to buy reagents through Transcriptic than it would be to buy them directly from a vendor—instead of having to purchase a $1,000 kit with 500 reactions, you can just buy the amount you need, when you need it.

If your experiment uses only synthesized DNA and commercially-available reagents, you can run it completely on Transcriptic without ever having to touch a pipette.

We take care of purchasing, storage and sample tracking, so you can be sure you'll never get an expired reagent or an enzyme that a grad student left on the bench for two hours last week. Every tube and aliquot you purchase through Transcriptic is marked with its lot number, expiration date, and volume.

We hope this gets you one step closer to being able to put down that pipette for good, and run better, more reproducible experiments.

As always, we'd love to hear your thoughts and feedback—get in touch!

An Easier Way To Launch Protocols

The Transcriptic platform is a reliable, repeatable and extremely flexible tool for running biology experiments. But until now, the only way to take advantage of that power has been to write code.

Today we're launching an all-new, easy-to-use interface for browsing and executing pre-written protocols. No longer will you need to understand JSON and POST requests to start a qPCR reaction: just find an appropriate protocol from the repository, fill in a few fields and click "Launch". It couldn't be simpler.

These protocols are built on top of the Autoprotocol open standard, meaning they can be shared, reused and built upon. In fact, we've already written and released a Core Library of standard protocols. Protocols are versioned, so you're guaranteed that a protocol won't change unexpectedly from one execution to the next. When you want to repeat an experiment, you can be sure you're getting exactly the same protocol.

With a little Python knowledge, you can easily create and upload your own protocols for use on Transcriptic. We've written a short tutorial to get you up and running with Autoprotocol quickly. (Of course, there's nothing special about Python—any language can generate Autoprotocol!)

While Transcriptic is still a fairly technical system, this makes it dramatically easier to use and removes the need to program to get started. Coupled with the Autoprotocol standard, the autoprotocol-python library, and the Autoprotocol Core Protocols, it's easy to start with the pre-built protocols available and switch to more powerful tools once you become productive with the basics and start yearning for greater flexibility.

We're eager to hear what you think and how it works for you. Try it out, and don't be afraid to get in touch!

The Autoprotocol Language Standard

Expressing protocols for biological research in a way that is both human- and computer-readable is a fundamental prerequisite for Transcriptic's cloud laboratory model. This is a topic we've written about before, and our basic answer has been described in great technical detail in our Low-Level API documentation for some time.

Over the last several months we’ve revised and further developed the language we use to encode biological protocols to be run on our robotics and are proud to formally release it as an open standard we’ve named Autoprotocol.

Unlike many of the other formal protocol languages out there, only Autoprotocol is:

  • useful today: if you implement a method for Autoprotocol using supported instructions, you can run it right now on Transcriptic and get real data back,
  • fully synthesizable, meaning exactly zero human interpretation is necessary,
  • fully schedulable, meaning you can know ahead of time what will run and when for a given protocol,
  • and now, open source.

Today we're excited to announce that we've published a separate, independent website detailing Autoprotocol's semantics. Autoprotocol is an open standard that anyone can use to express and share protocols intended to be read by both laboratory automation machinery and humans. We hope to contribute to better science by providing an unambiguous, structured language for precisely describing protocols and by providing a platform for scientists to collaborate and iterate on those protocols. Expressing and sharing protocols in this way allows for more reproducible experiments that produce more meaningful results. The Autoprotocol standard itself is open to contributions and improvements from the community. Come collaborate with other forward-thinking biologists who are embracing the future of scientific research!

But wait, there’s more! If the structure of Autoprotocol looks intimidating, never fear: today we’ve also released a Python library that makes it easy to generate Autoprotocol. Alongside it comes Autoprotocol Core, a library of standard protocols that provides examples to follow and, for some common use cases, removes the need to write any code at all.
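The value of a generator library is that it turns JSON bookkeeping into ordinary method calls. The snippet below is not the real autoprotocol-python API; it is a toy sketch, with invented method and field names, of the kind of builder pattern such a library provides.

```python
import json

class Protocol:
    """Toy builder that accumulates Autoprotocol-style JSON.

    This mimics the *idea* of a generator library; the method and field
    names here are invented for illustration and are not the actual
    autoprotocol-python interface.
    """

    def __init__(self):
        self.refs = {}
        self.instructions = []

    def ref(self, name, cont_type, storage):
        # Declare a container the protocol will operate on.
        self.refs[name] = {"new": cont_type, "store": {"where": storage}}
        return name

    def seal(self, container):
        self.instructions.append({"op": "seal", "object": container})

    def incubate(self, container, where, duration):
        self.instructions.append({
            "op": "incubate",
            "object": container,
            "where": where,
            "duration": duration,
        })

    def as_dict(self):
        # Serialize the accumulated state as an Autoprotocol-shaped document.
        return {"refs": self.refs, "instructions": self.instructions}


p = Protocol()
plate = p.ref("sample_plate", cont_type="96-pcr", storage="cold_4")
p.seal(plate)
p.incubate(plate, "warm_37", "1:hour")
print(json.dumps(p.as_dict(), indent=2))
```

The point of the design is that the scientist writes method calls in execution order and the library handles the serialization, so the emitted document is always well-formed JSON.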

YC + Transcriptic

Today we're extremely excited to announce a new partnership with Y Combinator to help get a new generation of lean biotech companies off the ground.

Long story short, we're offering $20,000 in Transcriptic Platform credits to all YC biotech companies (past, present and foreseeable future) to help them test out their ideas faster without needing to invest in any equipment or spend any time building out a lab. We'll make our Implementation Scientists available to all YC companies to help them think through and design their experimental protocols and ensure that they can be successful on Transcriptic.

Separately, YC is also investing in us. Even though we've already raised nearly $6 million and are 18 people, the YC network is hands down the best community I've seen in my time in Silicon Valley and we're very excited about becoming part of it. We're hoping that this is the start of a long and interesting relationship, of which the Platform credits deal is just the beginning.

There's never been a better time to start a biotech company, and as YC grows and evolves it's making biotech a big new focus. If you're considering starting a company, it's not too late to apply for the upcoming Winter 2015 batch.