Science for Societal Progress

Dennis Eckmeier, Science for Progress

We believe that our current and future challenges as a species can only be met with evidence-based solutions. We interview people who work to improve academia, or who focus on the interactions between academia and society. We want the people who follow our podcast to gain insight into the way academics research the world and ourselves, and how they can benefit from our work.

All Episodes

Oct 2020

29 min 34 sec

Jul 2019

36 min 30 sec

Dec 2018

37 min 16 sec

I talked with Dr. Nuno Henrique Franco about animal welfare in scientific research. The questions we address are: Why do we do animal experiments? What can be done to reduce the number of animal experiments? What are the regulations for animal research? What do scientists think about the ethics of animal experimentation? What is being done for outreach?

Nuno Franco is an expert on animal wellbeing in scientific research. He works as an assistant researcher at the “Instituto de Investigação e Inovação em Saúde” (Institute for Health Investigation and Innovation), or i3S for short, in Porto, Portugal. He has worked on animal welfare and animal ethics regulation, and he currently coordinates the national network of animal welfare bodies.

Why do we do animal experiments?

Animal research is mostly done in the context of health research. Researchers use animals instead of humans because human experimentation at this level would be unethical. Humans also don’t make good study subjects from a practical standpoint. The history of medical research shows that animal research translates well to humans in most cases.

The consensus is that animal suffering should be minimized

In order to make animal experimentation as humane as possible, international legislation applies the 3R principle: Replace animal use where animal-free methods would provide equal or better results. Reduce the number of animals used per experiment, and make the experiments as informative as possible. Refine experimental protocols to cause as little pain, suffering, and distress as possible.

When it comes to testing the toxicity of compounds, animal testing is successfully being replaced by animal-free methods, for example by using cells grown in a dish. As soon as they have been validated, these methods can be implemented on a larger scale, which makes them very cost efficient. The development, however, is expensive, and it does involve animal experiments.
Animal testing for cosmetics was banned by the EU, and it is further illegal to sell cosmetics for which new tests on animals were conducted. Only compounds used for medical purposes still need to be tested on animals for safety reasons.

When it comes to physiological research, the replacement of experiments by animal-free methods is less feasible. Of course there are – and always have been – animal-free methods which have advantages for specific questions. And often studies are animal-free up until the point at which the questions concern the whole animal. Animal-free “in vitro” methods are being further developed, the latest innovations being “organs on a chip” and “organoids”. Both are miniature versions of single organs and have some degree of complexity beyond ‘simple’ cell cultures.

What are the limits of animal-free methods?

I am a neuroscientist, and in neuroscience, “organoids” have been hyped as “mini brains”. This is a crude exaggeration. While their neurons do connect to some degree, they are still far from being actual brains. Some researchers have even raised the ethical issue of perfecting neuronal organoids into actual “mini brains”: could such a brain experience suffering? In any case, in neuroscience we ultimately need to study the function of the nervous system in the context of animal behavior. And for that, we need the whole living animal.

And this leads us to the final conclusion on the limitations of artificial “in vitro” (cell cultures, organoids, etc.) and “in silico” (computer models) methods for studying physiology: in order to recreate the whole physiological system of an animal and study it, we would need to already know everything there is to know about that animal. In other words, we would not need the model anymore.

Which regulations do researchers need to comply with?

There is a whole lot. The researcher needs to be licensed, the animal keeping facility needs to be licensed, and the project itself needs to be authorized.
This means that everybody involved is trained in the 3Rs, animal handling, husbandry, and surgery. There is always a veterinarian on call, etc. To say it with Nuno’s words, “there is a whole ecosystem” in place to ensure that all regulations are complied with.

So researchers have to justify the use of animals, and they have to classify the severity of a procedure. This classification is difficult, because it is highly subjective and biased by culture. Efforts are being made to harmonize this assessment. In any case, a procedure that is judged to inflict severe harm for long periods of time will be rejected. In some cases, the authority will retrospectively analyse whether the claimed severity was exceeded and whether the promised benefits were achieved. This is done to improve future cost/benefit analyses for approval decisions.

Researchers have the same ethical views as the general public

Nuno shared some preliminary results of a side project with us. It appears that researchers share the same concerns about animal experimentation as the public. The only difference is that they are more willing to accept the utilitarian argument for the general necessity of animal research.

What can researchers do for outreach?

Most people, once they know the benefits of animal research, agree that it is justified – under the condition, of course, that everything is done humanely. Only a small number of people will not accept utilitarian arguments; such ethical extremists won’t be reached with outreach. However, providing context on the usefulness of animal research is key to increasing acceptance.

The second key element for reaching the public is transparency. Animal researchers operate on the basis of an implicit social contract with the public. This social contract relies on the trust granted to scientists by the public. But trust needs to be earned. One way to achieve this is to be open about what happens in the laboratories.
This is why principal investigators must provide a non-technical summary of their ongoing project. These summaries are available online for the public to read. And there are further new efforts by institutes to be transparent about their procedures.

“We are not angels. There are a few instances when things go wrong, and these people have to be called out and sanctions have to be in order […] and some people should not be working with animals. But overall, I would say, almost all researchers working with animals are compassionate, competent people, and the public has to know that!”

Do you have questions, comments or suggestions? Email info@scienceforprogress.eu, write us on facebook or twitter, or leave us a video message on Skype for dennis.eckmeier. Become a Patron!

Further Information
•Dr. Nuno Henrique Franco SfProcur curator profile
•Nuno Franco’s blog
•Sociedade Portuguesa de Ciências em Animais de Laboratório
•EARA – European Animal Research Association
•Transparency Agreement on Animal Research in Portugal
•EU ban on Animal Testing for Cosmetics
•EU news: Testing cosmetics on animals: MEPs call for worldwide ban
•Organoids (wikipedia)
•Lab-grown ‘mini brains’ produce electrical patterns that resemble those of premature babies
•FRAME – independent charity with the ultimate aim of the replacement of animals in medical experiments
•RSPCA – Lab Animals – working to reduce the use and suffering of animals in research and testing
•Norecopa – Norway’s national consensus platform for the 3Rs
•The National Centre for the 3Rs – supporting scientists to replace, reduce and refine the use of animals in research in the UK and internationally
•Swiss 3R Competence Centre (3RCC) – promoting research, education & communication regarding the 3Rs
•Understanding Animal Research

Dec 2018

32 min 47 sec

Once a month I sit down with my friend and co-host Bart Geurten. We talk about things within and around academia, and exchange opinions on earlier episodes. In this episode, we first talk about the concept of overlay journals in the context of the newly founded community-based journal “Neurons, Behavior, Data Analysis, and Theory”. NBDT is a journal for computational neuroscience, and it is community-led, completely free, open, and not for profit.

We then talk about the role researchers should play in the dissemination of science to the public. This discussion has been going on on the internet for a while. In one of her recent YouTube videos, the German science communicator Mai Thi Nguyen-Kim picked it up. She says scientists should be forced to write summaries for a lay readership for every one of their articles.

And in the main section we revisit my interview with Hélène Pidon on GMOs. We talk about the fears we think are behind the anti-GMO sentiments, and why the verdict of the EU court on gene modification was unscientific.

Do you have questions, comments or suggestions? Email info@scienceforprogress.eu, write us on facebook or twitter, or leave us a video message on Skype for dennis.eckmeier. Become a Patron!

sources:
• NBDT website and twitter account
• “Das Problem mit wissenschaftlichen Studien” (“The problem with scientific studies”, German language)
• Mai Thi Nguyen-Kim
• 11: Genetically Modified Crops and the European Union – with Hélène Pidon
• Dennis’ guide on being a podcast guest

Nov 2018

38 min 25 sec

While the number of PhD graduates per year is rising worldwide, the number of proper long-term or permanent positions in academia isn’t. This leaves PhDs with ever decreasing chances of staying in academia. And it means that increasing numbers of PhDs stay postdocs for a decade or longer, only to have to leave after all.

Amanda (center in the picture), Cleyde (left), and Ian (right) are three former life science postdocs who left academia between 2015 and 2017. When transitioning, they felt isolated from their peer groups, who were predominantly academics. They found each other on Twitter while seeking advice, and got to talk about the challenges one faces when switching careers. So they decided to start the Recovering Academic podcast, which just entered its third season.

There is a lot of information out there about how to write a resume, and other technical advice. The Recovering Academic podcast shares experiences with these practical issues; they speak, for example, about networking and resume writing in their episodes. But what really brought them together was the emotional struggle of leaving the Ivory Tower. So they speak with their guests about the experience of leaving, the reactions of academic peers, and the feelings of failure.

This is a “crossover” episode of Recovering Academic and Science for Societal Progress. Besides talking about how Amanda, Cleyde and Ian met and why they decided to create Recovering Academic, I wanted to know what they themselves learned from doing the podcast, and which episode they liked the most. We talked about my story, too, which you can listen to in their version of this episode.

Recovering Academic Website
Recovering Academic rotating curation twitter account: @RecoveringAcad
Amanda Welch, Scientific Dispatches Consulting, @LadyScientist
Cleyde Helena, @DoctorPMS
Ian Street, @IHStreet
mentioned episode “Finding Your Fungus”

Nov 2018

30 min 7 sec

This episode is the first ‘Q&A’ episode, where my new co-host Dr. Bart Geurten (see episode 8) and I talk about what’s new in academia. Our conversations are free-form and may lead us astray here and there. We discuss the concept of ‘merit’ in the natural sciences.

We begin with a quick recap of episode 9, where I talked to Dr. Björn Brembs about the Journal Impact Factor (JIF). The JIF is a metric designed to measure the impact a journal has had in the scientific community. There are many problems with how JIFs are generated. What is even worse is the misuse of this metric for estimating the scientific ability of a single author of one article published in a journal. Bart tells us how he himself took the news about how the JIF can be influenced by the journals, and how his colleagues reacted. The omnipresent use of the JIF guides the decisions of our generation, and it is not questioned enough.

Björn Brembs had mentioned how the reviewers at ‘top’ journals blocked his attempts to publish an article about how the JIF is created. The reviewers claimed that everybody would already know about this. Based on Bart’s experience, and the feedback I received on Twitter, this is definitely not the case. Bart suggests bypassing commercial journals to reach the community through society communications and similar outlets (for example ‘DZG news’, an outlet of the German Zoological Society).

We point out a couple of characteristics which make a successful scientist in our eyes, and talk about the difficulty of measuring these. One metric in use is the Hirsch index, which also uses citations of each single article. In principle, this is one step better than the JIF.
But it has its own problems, since the potential citations differ widely, for example by accessibility of the paper (open access versus closed access), dissemination on social media, the size of the research field the work is directly relevant to, and the type of the article (research report, method paper, review article, etc.). And none of these things have anything to do with the abilities of the authors.

What would be merits other than the success of an article? Obviously we didn’t find a solution, but we talk about:
•having an impact on the field by making tools and data available
•outreach to the public
•researcher autonomy
•novelty of the research approach
•the ability to turn funding into positive outcomes (efficiently)
•leadership and management skills

Resume

Our advice for early career researchers: go for high journal impact factor journals, and apply to ALL of the grants and awards. Apart from that, we think it is important for the scientific community to move away from the JIF and towards new metrics, Open Science, leadership training for postdocs and young principal investigators, and modern team management techniques.

notes and further readings
•JIF is positively correlated with retraction rate
•recent German language article by Björn Brembs in Laborjournal
•10 Easy Ways to Increase Your Citation Count: A Checklist
•The open access advantage considering citation, article usage and social media attention

Oct 2018

36 min 46 sec

Just some announcements this time

In contrast to what was promised in the last podcast episode, we don’t have a full question and answer episode this time. I hope this will not happen too often in the future.

Dennis is a freelancer now.

First thing: I quit my postdoctoral fellowship to become a freelancer. You can see how I approach this on my website. Basically, I want to offer my skills and expertise in scholarship and neuroscience to help people with their academic writing, be it papers or funding applications. This means that I am currently a bit low on finances, which makes financing Science for Progress more difficult, of course. More about how you can help me with that further down.

Science for Progress News

new volunteers for @sfprocur

Susan Leemburg and Katharina Hennig are now helping me to find curators and manage the schedule.

looking for a facebook page moderator

I have not given our facebook page the love it deserves. So I am looking for someone who would share relevant articles on there and in general keep it lively. If you are interested, send me an email at socialadmin@scienceforprogress.eu.

inviting opinion pieces

I hope you noticed that I made some design changes to make these blog posts more pleasing to the eye, in particular for longer reads. This is because I want to invite writers to publish opinion pieces with us. Sadly, I cannot pay for such articles. I would really like to commission pieces from professional writers, but I simply can’t. So if you are thinking about contributing an article despite that, make sure it includes some promotion for yourself or your own project.

More podcast episodes!

I want to add some more discussion to the podcast. But because the interview episodes are usually already pretty dense in information and fill 30 minutes easily, I don’t want to add this discussion to the interview. It is also good to have some time between interview and discussion so I can gather some feedback from you, our listeners.
So we will alternate interviews and ‘Q&A episodes’, in which we will talk about some news and what is going on in Science for Progress, and then discuss the previous interview. This format should also be about 30 minutes in duration. This also means that we are moving from an episode every three weeks to an episode every two weeks.

Feedback

Intersting interview with @brembs about journal impact factors- for people who know about the issues always interesting, for those who don’t even more important! #science #WhatScienceIsImpacting https://t.co/RnQwpajLc5 — Simon Sprecher (@simon_sprecher) 17. September 2018

This is a great comment! Being interesting to people who know about the issue while being important to those who don’t is pretty much the sweet spot where I want the podcast to be. I hope there will be many more episodes receiving praise like this!

I'm listening to @SciForProgress podcast on impact factor. Everyone should listen at least to the 1st 5 minutes of it. When they say this is known: I did not know! And I've been doing science for 10 years now. — Science is not Glamorous (@Science_glamour) 29. September 2018

I have been thinking the exact same thing! I knew things weren’t 100% correct with the Journal Impact Factor, but I didn’t know about the details, either! When Björn Brembs says ‘it is known’, he doesn’t mean everybody is aware, but that the information is openly available if you look for it. Which I think most of us don’t!

Well, its wonderfull. As authentic and on spot as everything in the project — Zé (@93Antidote93) 13. September 2018

What more can I say than that this warms my heart. 😀

BECOMING A PATREON COMMUNITY!

Become a Patron!

As I mentioned further up, I currently do not have a steady income, and it may take a while to get there. This is why I need your help to continue investing my time and money into improving and growing Science for Progress activities.
For over a year I have created everything you see myself (even the podcast music!), and everything is coming out of my own pocket. At first, I was not 100% convinced Patreon would be the way to go. But after looking into my options, I think it is actually the best solution for us right now.

I always wanted Science for Progress to be a community with invested and engaged members. I did want to found an association that can raise tax-deductible donations. However, looking into the details, I realized I am not currently in a position to found an association. But a Patreon community that I treat like a social business is the next best thing. A social business does not funnel profits back to its investors but re-invests them into the social cause it is working for. For us this means I will use your pledges to invest my time and money into growing and improving Science for Progress outreach. The top priority remains to advance our causes. And yes, this means that if in some Utopian future the pledges rise to cover a full salary and expenses, I would work full-time on Science for Progress.

So the perks I offer to Patreon members won’t take away from our cause. Tier 1 and higher members get access to a Discord text and voice chat server which I plan to use to hold monthly meetings. Tier 2 members also get access to a “director’s cut” of the podcast that includes some parts that won’t make it into the final episode – I already uploaded the director’s cut for the last episode on Patreon. Higher tier members would, for example, be mentioned on our website and at the end of a podcast episode. Things like that. I would actually like to hear from you which kinds of perks you’d like to see for different tiers.

I hope you are as excited about this as I am and consider becoming a Patron!

Sep 2018

9 min 30 sec

What is the Journal Impact Factor?

The Journal Impact Factor is widely used as a tool to evaluate studies and researchers. It supposedly measures the quality of a journal by scoring how many citations an average article in this journal achieves. Committees making hiring and funding decisions use the ‘JIF’ as an approximation for the quality of the work a researcher has published, and by extension as an approximation for the capabilities of an applicant.

JIF as a measure of researcher merit

I find this practice highly questionable. First of all, the formula calculates a statistical mean. However, no article can receive fewer than 0 citations, while there is no upper limit to citations. Most articles – across all journals – receive only very few citations, and only a few receive a lot. This means we have a ‘skewed distribution’ when we plot how many papers received how many citations, and the statistical mean is not an appropriate summary for skewed distributions. Moreover, basic statistics and probability tell us that if you blindly choose one paper from a journal, it is impossible to predict – or even roughly estimate – its quality from the average citation rate alone. It is further impossible to know the author’s actual contribution to said paper. Thus, we are already stacking three statistical fallacies by applying the JIF to evaluate researchers.

But this is just the beginning! Journals don’t have an interest in the Journal Impact Factor as a tool for science evaluation. Their interest is in the advertising effect of the JIF. As we learn from our guest, Dr. Björn Brembs (professor for neurogenetics at the University of Regensburg), journals negotiate with the private company Clarivate Analytics (formerly Thomson Reuters) that provides the numbers. Especially larger publishers have a lot of room to influence the numbers above and below the division line in their favor.

Reputation is not quality.
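The problem with averaging a skewed citation distribution is easy to see with a handful of made-up numbers (an illustrative sketch, not real journal data):

```python
import statistics

# Made-up citation counts for ten articles in a hypothetical journal:
# most articles are cited rarely, a single one is a blockbuster.
citations = [0, 0, 1, 1, 2, 2, 3, 4, 5, 82]

mean = statistics.mean(citations)      # 10  -- what a JIF-style average reports
median = statistics.median(citations)  # 2.0 -- what a typical article actually gets

print(mean, median)
```

The mean is pulled far above what almost every article in the journal achieves, which is exactly why it tells you so little about the one paper you happen to pick.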
There is one thing the Journal Impact Factor can tell us: how good the reputation of the journal is among researchers. But does that really mean anything substantial? Björn Brembs reviewed a large body of studies that compared different measures of scientific rigor with the impact factor of journals. He finds that in most research fields the impact factor doesn’t tell you anything about the quality of the work. In some fields it may even be a predictor of unreliable science! This reflects the tendency of high-ranking journals to prefer novelty over quality.

How does this affect science and academia?

The JIF is omnipresent. A CV (the academic resume) is not only judged by the names of the journals in a publication list. Another factor is the funding a researcher has been able to get. However, funding committees may also use the JIF to evaluate whether an applicant is worthy of funding. Another point on a CV is the reputation of the advisers, who were themselves evaluated by their publications and funding. Yet another is the reputation of the institutes one worked at, which is to some degree evaluated by the publications and funding of their principal investigators. It is easy to see how this puts a lot of power into the hands of the editors of high-ranking journals.

Björn Brembs is concerned about the probable effect this has on the quality of science overall. If the ability to woo editors and write persuasive stories leads to more success than rigorous science, researchers will behave accordingly. And they will also teach their students to put even more emphasis on their editor-persuasion skills. Of course, not all committees use the JIF to determine who gets an interview. But still, the best strategy for early career researchers is to put all their efforts into pushing their work into high-ranking journals.

What now?!

We also talk about possible solutions to the problem.
In order to replace the JIF with better measures, Björn Brembs suggests building a universal open science network. Such a network would allow collecting more data on the scientific rigor and skills of a researcher directly. The money for this could be raised by no longer buying subscriptions from large publishing houses.

Getting rid of the JIF and moving to open access publishing with private publishers, however, would only shift the problem. High-reputation journals would ask for higher submission fees from authors than lower-ranking journals. So, instead of using the JIF, committees would judge applicants by the amount of funds they were able to invest in publishing. It would also not solve the problem of researchers trying to persuade editors rather than do rigorous research. But now scientific results of deteriorating quality would be openly accessible to a lay readership, which would be a disservice to the public. In the long run, we need to get rid of journals completely. One step in the right direction could be something like SciELO, a publicly funded infrastructure for open access publishing in South America.

In the meantime, and this may come as a surprise, Björn Brembs suggests that committees stop evaluating researchers at all. There is evidence, he says, that people will always have unjustified biases against women or institutes of lower reputation. So he suggests that, once a short list of applications has been selected based on the soundness of the research proposals, funding should be distributed using a lottery system. The evidence tells us that as long as we don’t have a real, objective measure, a random selection is our best option.

further reading
•Björn Brembs’ blog
•Brembs B. (2018). Prestigious science journals struggle to reach even average reliability.
•SciELO

Sep 2018

33 min 17 sec

Science compensates for the shortcomings of human cognition. It allows us to apply methods of investigation that are independent of our own subjective notions and irrationality. As a result, we have overcome common sense, traditional beliefs, and other misconceptions through thorough investigation. We even describe and utilize phenomena as incomprehensible as quantum mechanics, which defies our everyday experience in unimaginable ways.

There is, however, a real struggle here. Just like our brain perceives the non-existent ‘Kanizsa’s Triangle’ (picture on the left), we make certain identifiable mistakes in our thinking, too. This can really impact the way science is conducted and results are interpreted. Because of this, it usually takes whole communities of scientists to work out and refine scientific theories.

In this episode, I talk about heuristics and cognitive biases in science and society with neuroscientist Dr. Bart Geurten. Bart is not a cognitive scientist; he works on motion vision and locomotion in fruit flies. But after coming across the work of Daniel Kahneman and Amos Tversky on Prospect Theory, he decided to discuss cognitive biases with students in a seminar at the University of Göttingen (Germany).

When I asked Bart to record this episode, we were in a campervan on a highway in Australia. It was the middle of the night, we were both sleep deprived and jetlagged, and he was having near-death experiences because … well, I don’t usually drive on the wrong side of the road. These are the lengths we went to for this podcast, so I hope you all appreciate it!

further readings

Most of what we discussed can be found in Daniel Kahneman’s famous book ‘Thinking, Fast and Slow’.
I wikipedia-ed most of the topics for you:
•Heuristic
•Müller-Lyer Illusion
•Kanizsa’s Triangle
•Availability Heuristic
•Conjunction Fallacy (Linda Problem)
•Base rate fallacy
•Gambler’s Fallacy
•Prospect Theory
•Migration Crisis in Europe

Correction: The unusual series of roulette results occurred in Monaco, not Macao.

Sep 2018

35 min 56 sec