Apropos redo/undo etc.
Back in the 80s I ran a team doing AI research.
One of the features of our AI tool / inference engine
was belief revision: if some of the assumptions made by the tool turned out to be false, every deduction made from them had to be rolled back (recursively, of course) and then re-rolled forward from the new set of facts and assumptions.
Often a time-consuming business. Similarly, the probabilities deduced during Bayesian reasoning need to be revised when the underlying assumptions change.
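The rollback/roll-forward scheme Stu describes can be sketched as a toy justification-based truth maintenance system. This is my own minimal reconstruction in Python, not the original engine's code, and the class and rule names are made up for illustration: each derived fact records the premises it came from, retracting a premise recursively rolls back everything deduced from it, and the rules then re-fire against the surviving facts.

```python
# Toy sketch of dependency-directed belief revision (a minimal
# justification-based TMS). All names here are illustrative, not from
# the original system.

class BeliefBase:
    def __init__(self, rules):
        self.rules = rules       # list of (frozenset(premises), conclusion)
        self.facts = set()       # currently believed facts
        self.support = {}        # derived fact -> set of premises it used

    def assume(self, fact):
        self.facts.add(fact)
        self.roll_forward()

    def retract(self, fact):
        self.facts.discard(fact)
        self.support.pop(fact, None)
        # Recursively roll back every deduction that depended on this fact.
        dependents = [f for f, prem in self.support.items() if fact in prem]
        for f in dependents:
            self.retract(f)
        self.roll_forward()

    def roll_forward(self):
        # Re-fire rules to a fixpoint (simple forward chaining).
        changed = True
        while changed:
            changed = False
            for premises, conclusion in self.rules:
                if premises <= self.facts and conclusion not in self.facts:
                    self.facts.add(conclusion)
                    self.support[conclusion] = set(premises)
                    changed = True

rules = [(frozenset({"bird", "not_penguin"}), "flies"),
         (frozenset({"flies"}), "has_wings")]
bb = BeliefBase(rules)
bb.assume("bird")
bb.assume("not_penguin")
assert "has_wings" in bb.facts
bb.retract("not_penguin")        # an assumption proves false...
assert "flies" not in bb.facts   # ...so its consequences roll back too
```

As the thread notes, the retraction is inherently recursive, and the repeated roll-forward to a fixpoint is exactly the expensive part on period hardware.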
Doing all this in a real-time environment proved to overtax the HW available at the time (typically just a VAX).
Wrote a paper on it somewhere, but I can’t find it now to give you the lit ref.
Fascinating stuff, Stu. I spent a lot of time on VAXes back in the 80s, too — but I wasn’t doing anything as fun as AI research. Wrote a couple of byte-code compilers/interpreters and a whole lot of accounting software.
What language(s) did you use for your inference engine?
If you look at any of the AI textbooks I wrote in the 80s, all the examples are in PROLOG. We even built a PROLOG coprocessor in HW; it ran about 200× faster than the SW interpreter in C.
PROLOG does Robinson unification on Horn clauses, so you can implement it as a stack machine, which is how Warren’s abstract machine worked. We also built a 64-processor parallel version which would run like greased lightning on specially coded demos such as the patent searcher. But everyday PROLOG programs only got about a factor of 6 or 7 parallelism, and the belief-revision code was (necessarily) mostly sequential.
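For readers who haven't met it, the Robinson unification step at the heart of Horn-clause resolution can be sketched in a few lines of Python. This is a toy illustration with my own term encoding (variables are capitalized strings, compound terms are `(functor, arg, ...)` tuples), nothing like the WAM's actual register-based representation:

```python
# Toy Robinson unification over Prolog-style terms. Encoding is
# illustrative: variables are strings starting with an uppercase letter,
# atoms are lowercase strings, compound terms are tuples (functor, *args).

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings to the current representative term.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

def unify(a, b, subst=None):
    # Returns the most general unifier as a dict, or None on failure.
    subst = dict(subst or {})
    stack = [(a, b)]
    while stack:
        x, y = stack.pop()
        x, y = walk(x, subst), walk(y, subst)
        if x == y:
            continue
        if is_var(x):
            if occurs(x, y, subst):
                return None          # occurs check: reject X = f(X)
            subst[x] = y
        elif is_var(y):
            stack.append((y, x))
        elif (isinstance(x, tuple) and isinstance(y, tuple)
              and x[0] == y[0] and len(x) == len(y)):
            stack.extend(zip(x[1:], y[1:]))
        else:
            return None              # functor/arity clash or atom mismatch
    return subst

# parent(X, bob) unifies with parent(alice, Y):
mgu = unify(("parent", "X", "bob"), ("parent", "alice", "Y"))
assert mgu == {"X": "alice", "Y": "bob"}
assert unify(("f", "X"), ("g", "X")) is None
```

The explicit work stack here hints at why the operation maps so naturally onto a stack machine, as the comment says of Warren's design. (Real Prolog implementations typically skip the occurs check for speed.)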
Wow, that’s hard core geek. (That’s meant as a compliment).