I suppose the more serious answer is that it's important to have some idea of the accuracy of your measurements (the size of your error bars), and to only draw conclusions that fall within the space bounded by those error bars.
So it is possible that an improvement in an experiment leads to smaller error bars, such that an old model/theory/conclusion no longer lies within them and a newer, improved theory needs to be devised… and so science progresses. (Importantly, the existence of this new theory does not mean the old theory is rubbish to be discarded, only that it is not as good as the newer one. For example, Newton's gravity is still fine if you want to predict the path of a cannonball near the Earth… but you really need Einstein and relativity if your cannonball makes it close to Mercury or the Sun.)
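To make the error-bar idea concrete, here is a small illustrative Python sketch (the numbers, the `consistent_with` helper, and the choice of the standard error of the mean as the "error bar" are all my own assumptions, not from the answer above): a theory's prediction is kept only while it lies within the measured mean plus or minus a couple of error bars.

```python
import statistics

# Hypothetical repeated measurements of some quantity (illustrative values only)
measurements = [9.79, 9.83, 9.81, 9.80, 9.82]

mean = statistics.mean(measurements)
# Standard error of the mean: one common choice of "error bar"
sem = statistics.stdev(measurements) / len(measurements) ** 0.5

def consistent_with(prediction, centre, err, k=2):
    """A prediction survives if it lies inside centre ± k*err."""
    return abs(prediction - centre) <= k * err

# An old theory's prediction stays acceptable while it fits inside the bars;
# a better experiment (smaller err) may later exclude it.
old_theory_prediction = 9.81
print(consistent_with(old_theory_prediction, mean, sem))
```

If a new experiment shrank `sem` enough that `consistent_with` returned `False`, that would be the signal that a better theory is needed, exactly as described above.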
Hi rhiannay101
In clinical trials we base our conclusions on actual data. That data can only be as accurate as the patients and doctors who provided it, but in most cases it is accurate.
Having said that, in clinical trials medicines are only tested on a specific set of patients, so we can't draw totally accurate conclusions about how a medicine will work if everybody were to take it; it is a bit of a calculated risk too.