Testing Reasoning Software. A Bayesian Way
| Main Author: | |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Paderborn University: Media Systems and Media Organisation Research Group, 2008-07-01 |
| Series: | tripleC: Communication, Capitalism & Critique |
| Subjects: | |
| Online Access: | https://www.triple-c.at/index.php/tripleC/article/view/50 |
| Summary: | Is it possible to supply strong empirical evidence for or against the efficacy of reasoning software? There is a paradox concerning tests of reasoning software. On the one hand, acceptance of such software is slow, although overwhelming arguments speak for the use of such software packages. There seems to be room for skepticism among decision makers and stakeholders concerning its efficacy. On the other hand, teachers and developers of such software (the present author being one of them) consider its effects obvious. In this paper, I will show that both positions – skepticism vs. belief in efficacy – can be compatible with the evidence. This is the case if (1) the testing methods differ, (2) the facilities of observation differ, and (3) tests rely on contextual assumptions. In particular, I will show that developers of reasoning software can, in principle, know the efficacy of certain design solutions (cf. van Gelder, 2000b; Suthers et al., 2003). Other decision makers may, however, be unable to establish evidence for efficacy. |
| ISSN: | 1726-670X |