AutomatingScience
ThoughtStorms Wiki
Context : OnScience, ArtificialIntelligence
Automating science has been going on since HerbertSimon.
User-interfaces to help scientists ...
- Federated Databases in Science : http://www.etymon.com/wordpress/?p=7
- UbiquitousInformation
Quora Answer : We had Aristotle, Bacon, and Newton. What's the next stage of development for the scientific method?
Barry Rountree has an interesting thought.
I'm not sure I'd characterize it as a step forward in the scientific method, but it certainly points to a gathering crisis in science: what happens when science loses the authority that it has gained over the last 300 years or so?
Leaving that aside, though, I'd say the next step forward in the scientific method that we must confront is the coming of computation and AI to science.
This is going to make science weird in various ways.
Computers are starting to crunch data, build models / "theories", and extrapolate from those models to make new predictions. What science does and what Machine Learning does are basically the same thing. And soon we're going to start getting computers doing real and useful scientific research, by themselves.
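A minimal sketch of that "crunch data, build a model, extrapolate" loop, using invented data and an arbitrary polynomial model purely for illustration:

```python
import numpy as np

# Toy version of the loop: observe data, fit a model ("theory"), extrapolate.
# The data are synthetic: a noisy quadratic law invented for this example.
rng = np.random.default_rng(0)
x_observed = np.linspace(0, 10, 50)
y_observed = 3.0 * x_observed**2 + 2.0 * x_observed + rng.normal(0, 5, size=x_observed.size)

# "Build a theory": fit a degree-2 polynomial to the observations.
coefficients = np.polyfit(x_observed, y_observed, deg=2)
model = np.poly1d(coefficients)

# "Extrapolate": predict values at points the data never covered.
x_new = np.array([12.0, 15.0, 20.0])
print("Fitted coefficients:", coefficients)
print("Predictions at unobserved points:", model(x_new))
```

The interesting (and troubling) versions of this loop simply replace the two-parameter polynomial with models of millions of parameters, at which point the fitted "theory" stops being something a human can read off and inspect.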
The more troubling aspect is that the computers are going to come up with models that no human can grasp. One problem we already face is that a lot of science is now so obscure and so complex that only a few highly trained, deeply read specialists can understand it at an intuitive level. Whatever the ideal of science being verifiable by anyone, in practice most people just have to trust the authority of the institutions. And this is already leading to the problems Barry points out.
But what will happen when literally no human understands the science, and we are obliged to accept that here is a computer that does have a valid model, despite none of us understanding it, and we must just keep trusting in the predictions we get out of it?
How will we know when and how it is appropriate to challenge the computers vs. accept what they tell us? How do we trust that they aren't compromised and corrupted by biased programmers or malware?
Increasingly, this "computational" science is going to study complex systems like ecosystems, economies, human brains and psychology, epidemiology, models of materials etc. in simulation. These kinds of models, with millions or billions of interacting components, are going to be "chaotic" ... or rather, "sensitive to initial conditions".
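A minimal sketch of what "sensitive to initial conditions" means in practice, using the one-dimensional logistic map as a stand-in for a vastly larger simulation (the parameter value and starting points below are arbitrary choices for illustration):

```python
# Logistic map: x_{n+1} = r * x_n * (1 - x_n), in its chaotic regime.
# Two runs start from initial conditions that differ by one part in a billion.
r = 3.9
x_a, x_b = 0.5, 0.5 + 1e-9

for step in range(1, 61):
    x_a = r * x_a * (1 - x_a)
    x_b = r * x_b * (1 - x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: x_a={x_a:.6f}  x_b={x_b:.6f}  |diff|={abs(x_a - x_b):.2e}")
```

Within a few dozen steps the two runs disagree in every digit. Scale that up to models with millions or billions of interacting components, and no single simulated trajectory can be trusted as a prediction about one specific real system.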
The models we build will be "concept demonstrators". They will be able to tell us what particular "types" of ecosystem do, but they will not be plausible as models of any specific real system. For example, we may have amazing models of economies, but not an accurate model of "the British economy".
But if you can't say your model is of, or refers to, a specific real thing, then how can that model be said to be making novel predictions about it, or about the world? And if it can't make such predictions, how can it still be "falsifiable"? And if it isn't falsifiable, if its predictions aren't in practice testable, is it still science?
See also :
- AutomatingInvention
- Popper on TheLogicOfScientificDiscovery (PopperianEpistemology, NetworkEpistemology)
- DemarcatingScience
- ProductivityOfKnowledgeWork, AcademiaVsNewMedia
- (WarpLink) AcademiaVsNewMedia