How a stint in Silicon Valley unleashed one researcher’s business skills
Tomasz Głowacki’s career now straddles academia and industry, thanks to his participation in a leadership programme organized by the Polish government.
In 2007, when I started work as a research and teaching assistant at Poznań University of Technology in Poland (a job that straddled bioinformatics research and teaching discrete mathematics, algorithms and data structures), I thought academia would be a lifelong career. I enjoyed the intellectual freedom, the chance to work on challenging problems and the travel opportunities.
Shortly after defending my computer-science PhD thesis in 2013, I secured a place on the Polish government’s Top 500 Innovators initiative, a nine-week programme in research commercialization and management, held at universities ranked highly in the Academic Ranking of World Universities. The programme was set up because the government saw the lack of cooperation between researchers and business as one of the main reasons for Poland’s low position in the European Innovation Scoreboard rankings.
My interview focused on how to commercialize my research results: I was asked about factors such as potential customers, business models and pricing. Two months later, I was one of 500 scientists sent to the University of California, Berkeley; Stanford University in California; or the University of Cambridge, UK.
The goal was to learn from the very best researchers and business practitioners. While at the Walter A. Haas School of Business at Berkeley, I spent time with researchers, practitioners and entrepreneurs from Silicon Valley. What surprised me most was the close marriage between business and academic institutions in California.
Lecturers shared their experiences of research commercialization, business and start-up firms. This was very different from Poland, where the academic career path gives little credit for commercial activities or for cooperation with business. In my experience, many Polish scientists see commercialization activities as a roadblock to their academic careers.
During the Berkeley training, I heard how PhD students can successfully transition into business. These lectures were delivered by Peter Fiske, who is now director of the Water Energy Resilience Research Institute at Berkeley Lab, and whose career straddles both industry and academia. Fiske focused on skills that transfer between academia and business, covering data analysis, resourcefulness, technological awareness, resilience, project management, problem solving, English proficiency and good written communication. Fiske is a strong advocate of the need to market yourself as a scientist.

Mark Rittenberg, a business and leadership communications specialist at the Haas School of Business, taught us about the power of communication and storytelling. As scientists, we focus mostly on research results, and we tend to think that the content we present is enough to sell ourselves. But in business, how you present yourself, your self-confidence, an interesting story and non-verbal communication matter at least as much.
The innovators programme included one-day visits to technology companies in Silicon Valley, and the opportunity to undertake internships at some of them. I visited Google, the software companies Splunk and Autodesk, NASA and the biotechnology firm Genentech. These visits helped me to understand that ambitious work and challenging problems are not just the domain of universities.
I did a three-week internship at PAX Water Technologies in Richmond, California, where I was one of five Polish scientists who formed an interdisciplinary team to work on reducing household water consumption. This was a long way from our research topics, and a new area for all of us. A willingness to learn new things, curiosity, creativity and openness to unexplored areas helped us to drill down into the problem and to propose a solution. All of these are standard skills for a scientist.
The programme helped me to understand that scientists can be effective and successful outside academia, and that the business world is full of challenging problems to work on.
But my most important conclusion was that the applied side of my work is what matters most to me. The best fit seemed to be a transition into business.
Between June and September 2013, after completing the innovators programme, I applied for several research and development positions in business. I prepared a long CV that covered my research achievements. No one got back to me.
It was an important lesson. As scientists, we have to understand how our skills fit current job-market demands. So I connected with some old university friends who were working in business to discuss their interview experiences.
I decided to revamp my CV by making the description of my education shorter and focusing on my transferable skills; I included organizational skills, experience of data-analysis techniques, language skills and my structured approach to problem solving.
As scientists, we focus on problems and solutions when we describe our work. But a potential business employer is more interested in how you got there. You should focus on the tools and methods you have used, your knowledge of foreign languages, and how you organize and report your work.
In 2013, I found a job as an analyst at BAE Systems Applied Intelligence at its new offices in Poznań, working with IT systems and insurance data to detect customer fraud.
A year later, I discussed my transition with Fiske, who told me: “Now that you are on the other side, don’t lose touch with your friends in academia — seek ways to help them be more relevant to the outside world.”
I wanted to give something back and to find my own way to contribute to the academic world. I am now head of product development at Analyx, an international marketing data-analytics company, and also work part-time at Poznań School of Banking as a business practitioner, teaching project management as well as systems analysis and design. I discuss the real business cases I face with my students.

I also organize lectures and meetings for students with business experts, chief executives and consultants. Some of these have grown into long-term academic collaborations, and they give students a great opportunity to learn from practitioners and to land internships. I have also organized a joint master’s programme between academia and business, in which students work on hot industry topics supervised by business experts, then present their results and defend their theses at their universities.

Teaching based on my personal experience is more satisfying for me. Leaving academia was not a failure. It helped me to explore new opportunities, to better understand my professional expectations and to find the career path that fits me best.
This is an article from the Nature Careers Community, a place for Nature readers to share their professional experiences and advice. Guest posts are encouraged. You can get in touch with the editor at firstname.lastname@example.org.
California: America's Industrial R&D Powerhouse
SPOTLIGHT ON CALIFORNIA
The Golden State has been at the forefront of private-sector innovation in the United States for many years. What factors lie behind its success?

In 1938, Bill Hewlett and Dave Packard, two electrical-engineering graduates from Stanford University, started building audio oscillators in a garage in Palo Alto, California. By 1962, their company, Hewlett-Packard (HP), was listed in Fortune magazine's top 500 US companies by revenue. In 1999, it spun off its measurement-instruments business into Agilent Technologies, which broke the record for the largest initial public offering in Silicon Valley history. HP is now one of the world's leading electronics manufacturers, with revenues of US$126 billion in 2010 and more than 320,000 employees worldwide.

HP's iconic story — along with those of Apple, Intel, Yahoo and Google — has influenced nearly every fledgling Californian company hoping to repeat its success. It also highlights one of the state's defining features: its strength in industrial research and development (R&D).

According to the US National Science Foundation (NSF), California businesses invested US$64 billion in R&D in 2007 — more than Michigan, Massachusetts and New Jersey combined. Overall, California accounts for 22 percent of all R&D in the United States.

A long history of high-tech breakthroughs is just one of the factors that have made the Golden State the industrial R&D powerhouse it is today. It has a "whole ecosystem of innovation", says Darlene Solomon, chief technology officer at Agilent Technologies, based in Santa Clara. A January 2011 study commissioned by the northern Californian life-science trade association BayBio and the California Healthcare Institute (CHI) expands on this, listing the factors that have helped the state's biomedical industry to thrive: leading-edge science; experienced venture capital; a diverse, well-educated workforce; a group of serial entrepreneurs; a culture that appreciates risk-takers and does not penalize failure; healthy scepticism about time-honoured institutions; and the freedom to ignore boundaries.

In addition, California's world-class public and private universities attract billions of dollars in federal research funding and produce thousands of US postdoctoral scientists and engineers each year. The state is also home to national laboratories such as Lawrence Berkeley and Lawrence Livermore. These elements and more apply across industries — from biotechnology to computer technology to renewable energy — and help drive job creation, even in tough economic times.

Clusters of innovation

California boasts a diverse range of industries spread across several major regional clusters, including the San Francisco Bay Area, Sacramento, Los Angeles and San Diego. In northern California, Silicon Valley encompasses a chain of cities south of San Francisco — including Menlo Park, Palo Alto, Sunnyvale and San Jose — but the high-tech companies whose products gave the area its name are actually spread throughout the wider San Francisco Bay Area. The semiconductor research the valley is famous for is now translating into solar-energy R&D, which makes use of the silicon and thin-film manufacturing technology perfected there. The city of South San Francisco, home to Genentech, is known for its concentration of biotechnology and pharmaceutical companies.
GENENTECH: Biotechnology company Genentech, based in South San Francisco, anticipates continued job growth in the next decade.

In southern California, the San Diego area hosts several institutions that have made the city a hub for biomedical research, such as the University of California, San Diego (UCSD), the Scripps Research Institute and the Salk Institute for Biological Studies. "San Diego has grown up over the last 30 years or so as one of the premier areas for doing biotechnology," says Paul Laikind, chief business officer of the Sanford-Burnham Medical Research Institute.

Laikind, based at Sanford-Burnham's headquarters in La Jolla, north-west San Diego, says biotechnology companies in the city are concentrated in a small area. "Because of that, it's a very collaborative entrepreneurial environment," he explains. A non-profit institute, Sanford-Burnham has taken advantage of San Diego's industrial infrastructure to help commercialize its research: since 1987 it has spun off about a dozen start-up companies. Laikind himself founded four start-up companies in San Diego, all of which went public, before joining Sanford-Burnham in 2009. He says those in the region involved in biotechnology share a desire to achieve results by working together rather than competing with each other: "Our competition is whether we can make a drug that can work or not, which means a lot of collaboration between companies and institutions like ours."

A further geographical advantage of California is the state's west-coast location, which makes it a natural crossroads for international scientists and engineers. "California is a key marketplace for the exchange of ideas from around the globe," says Agilent's Solomon. "Especially as Asia has taken off, I think California has been positioned [in the market] very well as a point of access and a good cultural fit in terms of that emerging growth."

Money magnet

Although California's domination in industrial R&D has been achieved largely through the efforts of the private sector, the state does provide generous incentives for businesses to do science. Companies that increase their R&D investment from the previous year get a tax credit equivalent to 15 percent of the difference, says Andrea Jackson, director of state and government affairs for Genentech: a firm that raised its qualifying R&D spending by US$10 million from one year to the next, for example, could claim a credit of US$1.5 million. "[The California government is] always incentivizing companies to do more R&D," she says. According to the California Budget Project, which carries out independent fiscal and policy analysis, 2,483 corporations claimed US$1.2 billion in R&D credits in 2008.

In return, companies in California are generous about reinvesting their earnings in R&D. Agilent dedicates around 10 percent of its roughly US$6.5 billion annual revenue to R&D globally, a proportion that Solomon says is above average among its peers. "In some of our businesses, where we're focusing on future growth, we're investing far more than that 10 percent," she adds.

California also attracts far and away the most venture capital (VC) in the United States — US$11.6 billion in 2010, nearly five times as much as the second-ranked state, Massachusetts. Furthermore, California ranks first in the country in the number of jobs at, and revenues of, venture-backed companies, according to a 2011 study by global business analysts IHS for the US National Venture Capital Association, with 60 percent of VC investments in California going to the software, energy and biotechnology sectors.
Academic prowess

SANFORD-BURNHAM MEDICAL RESEARCH INSTITUTE: Sanford-Burnham Medical Research Institute, a non-profit institute, has spun off around a dozen companies since 1987.

Industrial innovation in California is well supported by its academic institutions. Stanford University, a private institution, sits at the heart of Silicon Valley and fosters strong relationships with companies — many of which are based at the Stanford Research Park, founded in 1951 when the university leased some of its land to emerging technology firms. The research park offers several incentives to encourage industry-university interactions: businesses can sponsor joint research projects with Stanford faculty and students, invite faculty members to join their boards or act as consultants, offer internships to students and use the university's libraries.

SRI International, a non-profit contract-research institute, split off from Stanford University in 1970 and now employs more than 2,100 people. The institute has conducted research for more than 90 private and non-profit businesses, and also licenses and commercializes the technology it develops with federal funds. Norman Winarsky, SRI's vice-president of ventures, says its four spin-off companies that have gone public are now worth US$20 billion.

The University of California (UC) has also forged enduring partnerships and collaborations with industry. The UC system, spread across 10 campuses, is the state's flagship higher-education institution and a powerful engine for job creation, says Steve Kay, dean of the division of biological sciences at UCSD. The university has "generated the pipeline of trained scientists and technologists that has really fed into the high-tech, the biotech, and now, more recently, the clean-tech explosions," he says. A UCSD study published in February 2011 revealed that the 156 active UCSD-related companies are directly responsible for 18,140 jobs.

The UC system also hosts four Gray Davis Institutes for Science and Innovation, each a collaboration between several campuses, tasked with accelerating technology transfer and increasing interactions between the state, UC and industry. They are the Center for Information Technology Research in the Interest of Society (CITRIS), the California Institute for Quantitative Biosciences (QB3), the California NanoSystems Institute (CNSI) and the California Institute for Telecommunications and Information Technology (Calit2).

California industry also provides the most support for local academic R&D in the United States. During the 2009 fiscal year, industry-financed R&D expenditures at Californian universities and colleges totalled US$506 million, according to the NSF.

Staying ahead

Funding for higher education, however, has been harder to come by in the wake of the recent economic downturn. The UC system is facing financial challenges as a result of the state budget deficit. For the 2010 fiscal year, UC had a budget shortfall of US$1 billion, which it has tried to make up with faculty furloughs, tuition increases and programme cuts.

On a more positive note, certain research avenues are just starting to grow. In 2004, voters in California passed Proposition 71, a US$3-billion bond issue to fund stem-cell research in the state. The California Institute for Regenerative Medicine, a state agency, allocates the funds.
In 2010, as a result of those grants, five new stem-cell research facilities were dedicated at UC Davis, UC Los Angeles (UCLA), UC Irvine, Stanford University and the University of Southern California in Los Angeles. A sixth centre, the Sanford Consortium for Regenerative Medicine, is under construction in San Diego and due to open in 2011 for collaborative stem-cell research between the Salk Institute, the Scripps Research Institute, UCSD and Sanford-Burnham. The hope is that the research will eventually provide new opportunities to spin out companies focused on stem-cell therapies.

California has also been hit hard by unemployment, which now exceeds 12 percent. The biomedical research industry, though, has not shed as many jobs as other high-tech sectors, according to the BayBio/CHI study. The biofuels industry is also one of the fastest growing in terms of job creation, says Gail Maderis, president and chief executive of BayBio.

Agilent's Solomon says there are jobs available, but workers need to be flexible. For new recruits, the company looks for 'T-shaped' people — researchers who are highly skilled in one area but who can also communicate horizontally across fields. Winarsky of SRI adds that scientists working on innovative research have good job prospects: "They are high-premium people."

A question on many people's minds is how the state, strapped for funds, will deal with its budget crisis. Genentech's Jackson says she does not anticipate the corporate R&D tax credit being trimmed back. "So far the legislature has felt a compelling interest to keep those tax credits in place to continue to grow the industry," she says. Pharmaceutical companies like Genentech take comfort in the fact that their products remain necessary, even in lean times. "We're in a flat growth spell right now, but the industry's pipeline is healthy," Jackson says. "We anticipate continued job growth in the next decade."

California's history of innovation, from HP's inception to today's efforts in stem-cell research and solar technology, provides a strong foundation for future growth.
Finding Job Satisfaction as a Humanitarian Researcher
Panagiotis Vagenas left Yale University to advise a non-profit organization on research design and quality.

What did you do before Yale?

I’m from Greece originally. In 1996 — when I was 17 — I moved to London, UK. I studied biochemistry for my degree and did a PhD in immunology. When I graduated I moved to the Population Council labs at the Rockefeller University in New York to start my postdoc.

What did you study?

I worked on basic research in HIV. What’s always motivated me is trying to help people — to have a meaningful career in that sense. So in summer 2010 I moved to the Yale School of Public Health and did a master’s in public health (MPH), and went on to join the faculty at the Yale School of Medicine in 2013.

And then you moved to your current job.

Yes — I’d just applied for a K01 grant, which didn’t get awarded at the time; that was a big shock. So I figured I should do something different, and what still motivated me was making an impact on people’s lives. So I found the job I have now with Project Concern International (PCI).

Where did you get the motivation to make an impact on people’s lives?

Really, I grew up in an environment that was like that. My mum’s a psychiatrist and my dad’s a civil engineer in the public sector, so while they’re not doing the kind of work I’m doing, it’s always been for the public good. And then I loved biology at school, so that was the start of the path that got me here.

What does PCI do?

Our mission is to enhance health, end hunger and overcome hardship. It’s a really broad mission that wants to help people in the developing world lead better lives. I think a lot of organizations like PCI — which is funded primarily by the US government but also from other sources — appreciate research more and more in tracking the impact and sustainability of their work.

Could you give me an example?

I was recently in Ethiopia, where other members of my team and I designed a sustainability study for an initiative we ran to empower women in the region. The project ended six years ago for PCI, but women are still meeting and benefiting from our work there. It’s not the old method of development — handouts, a short project in the field — we want to go into these programmes knowing the impact is sustained.

How are you finding the head office in San Diego?

I’m enjoying the outdoors culture here in California. Everybody’s out; everybody’s running and hiking and enjoying the beaches year-round. I meet a lot of people from work. My parents came to visit last April and they really enjoyed it. San Diego is paradise.
Is Science In Trouble?
A Conversation With Colin Camerer About the Replication Crisis

News Writer: Emily Velasco

If there's a central tenet that unites all of the sciences, it's probably that scientists should approach discovery without bias and with a healthy dose of skepticism. The idea is that the best way to reach the truth is to allow the facts to lead where they will, even if it's not where you intended to go. But that can be easier said than done. Humans have unconscious biases that are hard to shake, and most people don't like to be wrong.

In the past several years, scientists have discovered troubling evidence that those biases may be affecting the integrity of the research process in many fields. The evidence also suggests that even when scientists operate with the best intentions, serious errors are more common than expected, because even subtle differences in the way an experimental procedure is conducted can throw off the findings. When biases and errors leak into research, other scientists attempting the same experiment may find that they can't replicate the findings of the original researcher. This has given the broader issue its name: the replication crisis.

Colin Camerer, Caltech's Robert Kirby Professor of Behavioral Economics, T&C Chen Center for Social and Decision Neuroscience Leadership Chair, executive officer for the Social Sciences and director of the T&C Chen Center for Social and Decision Neuroscience, has been at the forefront of research into the replication crisis. He has penned a number of studies on the topic and is an ardent advocate for reform. We talked with Camerer about how bad the problem is, what can be done to correct it, and the "open science" movement, which encourages the sharing of data, information and materials among researchers.

What exactly is the replication crisis?

What instigated all of this is the discovery that many findings — originally in medicine, but later in areas of psychology, in economics and probably in every field — just don't replicate or reproduce as well as we would hope. By reproduce, I mean taking data someone collected for a study and doing the same analysis just to see if you get the same results. People can get substantial differences, for example, if they use newer statistics than were available to the original researchers. The earliest studies into reproducibility also found that sometimes it's hard to even get people to share their data in a timely and clear way. There was a norm that data sharing is sort of a bonus, but isn't absolutely a necessary part of the job of being a scientist.

How big of a problem is this?

I would say it's big enough to be very concerning. I'll give an example from social psychology, which has been one of the most problematic areas. In social psychology, there's an idea called priming, which means if I make you think about one thing subconsciously, those thoughts may activate related associations and change your behavior in some surprising way. Many studies on priming were done by John Bargh, who is a well-known psychologist at Yale. Bargh and his colleagues got young people to think about being old and then had them sit at a table and do a test. But the test was just a filler, because the researchers weren't interested in its results. They were interested in how thinking about being old affected the behavior of the young people.
When the young people were done with the filler test, the research team timed how long it took them to get up from the table and walk to an elevator. They found that the people who were primed to think about being old walked more slowly than the control group that had not received that priming. They were trying to get a dramatic result showing that mental associations about old people affect physical behavior. The problem was that when others tried to replicate the study, the original findings didn't replicate very well. In one replication, something even worse happened. Some of the assistants in that experiment were told the priming would make the young subjects walk more slowly, and others were told the priming would make them walk more quickly — this is what we call a reactance, or boomerang, effect. And what the assistants were told to expect influenced their measurements of how fast the subjects walked, even though they were timing with stopwatches. The assistants' stopwatch measures were biased compared with an automated timer. I mention this example because it's the kind of study we think of as too cute to be true. When the failure to replicate came out, there was a big uproar about how much skill an experimenter needs to do a proper replication.

You recently explored this issue in a pair of papers. What did you find?

In our first paper, we looked at experimental economics, which is something that was pioneered here at Caltech. We took 18 papers from multiple institutions that were published in two of the leading economics journals. These are the papers you would hope would replicate the best. What we found was that 14 out of 18 replicated fairly well, but four of them didn't. It's important to note that in two of those four cases, we made slight deviations in how the experiment was done. That's a reminder that small changes can make a big difference in replication. For example, if you're studying political psychology and partisanship and you replicate a paper from 2010, the results today might be very different because the political climate has changed. It's not that the authors of the original paper made a mistake; it's that the phenomenon in their study changed.

In our second paper, we looked at social-science papers published between 2010 and 2015 in Science and Nature, which are the flagship general-science journals. We were interested in them because these were highly cited papers and were seen as very influential. We picked out the ones that wouldn't be overly laborious to replicate, and we ended up with 21 papers. What we found was that only about 60 percent replicated, and the ones that didn't replicate tended to focus on things like priming, which I mentioned before. Priming has turned out to be the least replicable phenomenon. It's a shame, because the underlying concept — that thinking about one thing elevates associations to related things — is undoubtedly true.

How does something like that happen?

One cause of findings not replicating is what we call "p-hacking". The p-value measures how likely it is that you would see an effect at least as large as the one in your data purely by chance, if there were no real effect; if the p-value is low, the effect is unlikely to be a fluke. In social science and medicine, for example, you are usually testing whether changing the conditions of the experiment changes behavior. You really want to get a low p-value, because it suggests that the condition you changed did have an effect. P-hacking is when you keep trying different analyses with your data until you get the p-value to be low.
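To see why this is so corrosive, consider a minimal simulation: an illustrative Python sketch, not code from Camerer's studies. The group sizes and the rule of trying up to 20 subsample analyses are assumptions chosen for illustration. Even when there is no real effect at all, re-analyzing the same data until one test comes out significant pushes the false-positive rate well above the nominal 5 percent:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_studies = 2000   # simulated studies in which the true effect is zero
    n_subjects = 40    # subjects per group
    n_tries = 20       # analyses attempted per study (subsamples, outlier rules, ...)

    hacked_hits = 0
    for _ in range(n_studies):
        # Null world: treatment and control come from the same distribution.
        treatment = rng.normal(size=n_subjects)
        control = rng.normal(size=n_subjects)
        # "P-hacking": keep re-testing random subsamples until one result is significant.
        for _ in range(n_tries):
            keep = rng.choice(n_subjects, size=30, replace=False)
            _, p = stats.ttest_ind(treatment[keep], control[keep])
            if p < 0.05:
                hacked_hits += 1
                break

    print(f"false-positive rate after p-hacking: {hacked_hits / n_studies:.0%}")
    # A single pre-specified t-test would come out "significant" about 5% of the
    # time here; hunting across 20 analyses makes spurious effects far more common.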
A good example of p-hacking is deleting data points that don't fit your hypothesis — outliers — from your data set. There are statistical methods to deal with outliers, but sometimes people expect to see a correlation and don't find much of one. So then they think of a plausible reason to discard a few outlier points, because by doing that they can get the correlation to be bigger. That practice can be abused, but at the same time, there sometimes are outliers that should be discarded. For example, if subjects blink too much when you are trying to measure visual perception, it is reasonable to edit out the blinks or not use some subjects.

Another explanation is that sometimes scientists are simply helped along by luck. When someone else tries to replicate that original experiment but doesn't get the same good luck, they won't get the same results.

In the sciences, you're supposed to be impartial and say, "Here's my hypothesis, and I'm going to prove it right or wrong." So why do people tweak the results to get an answer they want?

At the top of the pyramid is outright fraud and, happily, that's pretty rare. Typically, if you do a postmortem or a confessional in the case of fraud, you find a scientist who feels tremendous pressure. Sometimes it's personal — "I just wanted to be respected" — and sometimes it's grant money or being too ashamed to come clean. In the fraudulent cases, scientists get away with a small amount of deception, and they get very dug in because they're really betting their careers on it. The finding they faked might be what gets them invited to conferences and gets them lots of funding. Then it's too embarrassing to stop and confess what they've been doing all along.

There are also faulty scientific practices less egregious than outright fraud, right?

Sure. It is the scientist who thinks, "I know I'm right, and even though these data didn't prove it, I'm sure I could run a lot more experiments and prove it. So I'm just going to help the process along by creating the best version of the data." It's like cosmetic surgery for data. And again, there are incentives driving this. Often in Big Science and Big Medicine, you're supporting a lot of people on your grant. If something really goes wrong with your big theory or your pathbreaking method, those people get laid off and their careers are harmed.

Another force that contributes to weak replicability is that, in science, we rely to a very large extent on norms of honor and the idea that people care about the process and want to get to the truth. There's a tremendous amount of trust involved. If I get a paper to review from a leading journal, I'm not necessarily thinking like a police detective about whether it's fabricated. A lot of the frauds were only uncovered because there was a pattern across many different papers. One paper was too good to be true, and the next one was too good to be true, and so on. Nobody's good enough to get 10 too-good-to-be-trues in a row. So, often, it's kind of a fluke. Somebody slips, or a person notices and then asks for the data and digs a little further.

What best practices should scientists follow to avoid falling into these traps?

There are many things we can do — I call it the reproducibility upgrade. One is preregistration, which means before you collect your data, you publicly explain and post online exactly what data you're going to collect, why you picked your sample size, and exactly what analysis you are going to run.
Then, if you do a very different analysis and get a good result, people can question why you departed from what you preregistered and whether the unplanned analyses were p-hacked. The more general rubric is called open science, in which you act as if basically everything you do should be available to other people, except for certain things like patient privacy. That includes original data, code, instructions and experimental materials like video recordings — everything.

Meta-analysis is another method I think we're going to see more and more of. That's where you combine the results of studies that are all trying to measure the same general effect. You can use that information to find evidence of things like publication bias, which is a sort of groupthink. For example, there's strong experimental evidence that giving people smaller plates causes them to eat less. So maybe you're studying small and large plates, and you don't find any effect on portion size. You might think to yourself, "I probably made a mistake. I'm not going to try to publish that." Or you might say, "Wow! That's really interesting. I didn't get a small-plate effect. I'm going to send it to a journal." And the editors or referees say, "You probably made a mistake. We're not going to publish it." Those are publication biases. They can be caused by scientists withholding results or by journals not publishing them because they get an unconventional result. If a group of scientists comes to believe something is true and the contrary evidence gets ignored or swept under the rug, a lot of people end up converging on a collective conclusion about something that's not true. The big damage is that it's a colossal waste of time, and it can harm public perceptions of how solid science is in general.

Are people receptive to the changes you suggest?

I would say 90 percent of people have been very supportive. One piece of very good news is that the Open Science Framework has been supported by the Laura and John Arnold Foundation, which is a big private foundation, and by other donors. The private foundations are in a unique position to spend a lot of money on things like this. Our first grant to do replications in experimental economics came when I met the program officer from the Alfred P. Sloan Foundation. I told him we were piloting a big project replicating economics experiments. He got excited, and it was figuratively like he took a bag of cash out of his briefcase right there. My collaborators in Sweden and Austria later got a particularly big grant, of $1.5 million, to work on replication. Now that there's some momentum, funding agencies have been reasonably generous, which is great.

Another thing that's been interesting is that while journals are not keen on publishing a replication of a single paper, they really like what we've done, which is a batch of replications. A few months into working on the first replication paper in experimental economics funded by Sloan, I got an email from an editor at Science who said, "I heard you're working on this replication thing. Have you thought about where to publish it?" That's a wink-wink, coy way of saying "Please send it to us" without any promise being made. They did eventually publish it.

What challenges do you see going forward?

I think the main challenge is determining where the responsibility lies. Until about 2000, the conventional wisdom was, "Nobody will pay for your replication and nobody will publish your replication. And if it doesn't come out right, you'll just make an enemy.
Don't bother to replicate." Students were often told not to do replication because it would be bad for their careers. I think that's false, but it is true that nobody is going to win a big prize for replicating somebody else's work. The best career path in science comes from showing that you can do something original, important and creative. Replication is exactly the opposite: it is important for somebody to do it, but it's not creative. It's something that most scientists want someone else to do.

What is needed are institutions that generate steady, ongoing replications, rather than relying on scientists who are trying to be creative and make breakthroughs to do it. It could be a few centers that are just dedicated to replicating. They could pick every fifth paper published in a given journal, replicate it, and post their results online. It would be like auditing, or a kind of Consumer Reports for science. I think some institutions like that will emerge. Or perhaps granting agencies, like the National Institutes of Health or the National Science Foundation, should be responsible for building in safeguards. They could have an audit process that sets aside grant money to do a replication and check your work.

For me, this is like a hobby. Now I hope that some other group of careful people who are very passionate and smart will take up the baton and start to do replications very routinely.