Anton Kolonin, June 10 1998
Basically, there are two kinds of knowledge: pure facts and distilled knowledge (both are defined below).
There are four ways to obtain knowledge
Data mining (pure facts analysis)
Human input (handmade distilled knowledge definition)
Synthesis (producing distilled knowledge from itself)
Exchange (import-export of pure facts or distilled knowledge)
Pure facts are
Textual or hypertextual data (HTML, TXT, etc.)
Relational data (SQL, DBF, etc.)
Data stored in spreadsheets (Excel, Lotus, etc.)
Objects in memory (Smalltalk, C++, Java)
Distilled knowledge is
Lists of properties or links (Lisp)
Classes (Smalltalk, C++, Java)
Memes (recent AI approaches)
There are two known approaches to expressing knowledge links
Two-valued scales. Either one piece of knowledge is connected to another, or it is not. This is how it is realized in most programming languages.
Multi-valued scales. The connection between pieces of knowledge is expressed by some kind of weight factor. This is realized in some fuzzy-logic ("smooth-logic") based systems.
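To make the contrast concrete, here is a minimal Python sketch of the two scales (all names and values are hypothetical illustrations, not part of ORL):

```python
# Two-valued scale: a link between two pieces of knowledge either
# exists or it does not, as in most programming languages.
boolean_links = {
    ("bird", "flies"): True,
    ("penguin", "flies"): False,
}

# Multi-valued scale: each link carries a weight factor, here a real
# number in [0, 1], as in fuzzy ("smooth-logic") based systems.
weighted_links = {
    ("bird", "flies"): 0.9,
    ("penguin", "flies"): 0.05,
}

def linked(weighted, a, b, threshold=0.5):
    """Collapse a weighted link into a two-valued one via a threshold."""
    return weighted.get((a, b), 0.0) >= threshold
```

Note that the two-valued answer can always be recovered from the multi-valued one by thresholding, but not the other way around.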
Let us assume that the multi-valued, smooth-logic approach is better. Then we have to take care of weight factor support. If we decide to implement it simply as a real number, we face some problems.
A number-based weight will not work properly in a completely distributed system. The cause is that the same piece of knowledge might be used on any number of remote computers on the net, so all of them would have to block one another to manage the weight factor value.
Even if we decide to waive the straightforward "distributed knowledge" approach and concentrate weight factor management on responsible network nodes, there is still a danger that the weight factor value could be changed directly by some part of the system, out of context and out of step with the actual cardinality of the links present.
The ORL programming language provides another approach to representing "weights". In ORL terms, "distilled knowledge" means "class". A class is represented dually, as
A data structure that contains the piece of knowledge
Any number of facts (objects) that realize this piece of knowledge (class)
The trick is that the class (knowledge) is alive while objects (facts) are present. When there are no objects, the class is ready to disappear. Moreover, no one can simply remove or add an object (fact) to a given class (knowledge). One has to prove the consistency of the object with the other related objects in order to add it, or to keep the integrity of the related objects when removing any number of them. This way, the weight factor is not maintained artificially but is managed automatically, purely through the correct adding and removing of facts. The same holds in any distributed environment.
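A minimal Python sketch may clarify this mechanism (all names are hypothetical stand-ins; ORL itself is not Python). The point is that the weight is never stored or set directly — it is simply the cardinality of the facts currently realizing the knowledge:

```python
class Knowledge:
    """Distilled knowledge, kept alive only by its facts."""

    def __init__(self, name):
        self.name = name
        self._facts = []            # objects realizing this knowledge

    def add_fact(self, fact, related=()):
        # One has to prove consistency with related facts to add one.
        if not all(consistent(fact, r) for r in related):
            raise ValueError("fact is inconsistent with related facts")
        self._facts.append(fact)

    def remove_fact(self, fact):
        self._facts.remove(fact)

    @property
    def weight(self):
        # The weight is managed automatically, purely by adding and
        # removing facts; no one can change it out of context.
        return len(self._facts)

    @property
    def alive(self):
        # The class is ready to disappear when no objects remain.
        return self.weight > 0


def consistent(fact, other):
    # Placeholder consistency check; in ORL this would be domain logic.
    return fact is not None
```

Because the weight is derived rather than stored, distributed nodes never need to lock a shared counter; each node only adds or removes its own facts.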
Why is it available in ORL only?
Prolog and Lisp provide distilled knowledge management features only
SQL provides pure facts management only (indeed, the DDL subset of SQL is hardly usable dynamically)
C++ and Java provide both classes and objects, but there is no easy way to create or drop classes on the fly
The key feature of ORL is the broken wall between object and class.
From now on, you can realize and implement that
- A class is just a regular object of class "class"
- Knowledge is just a kind of fact
Indeed, the ORL engine starts from two objects. Those objects are
- Class "object" inherited by class "class"
- Class "class" with two instantiated class objects
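Python's own object model happens to demonstrate the same bootstrap, so it can serve as an illustration, with Python's `type` playing the role of class "class" and `object` the role of class "object" (this is an analogy, not ORL itself):

```python
# Class "object" is an object of class "class", class "class" is an
# instance of itself, and class "class" inherits class "object" --
# the same two-object bootstrap the ORL engine starts from.
assert isinstance(object, type)
assert isinstance(type, type)
assert issubclass(type, object)

# Because a class is a regular object, it can be created and dropped
# on the fly, like any other object:
Fact = type("Fact", (object,), {"kind": "pure fact"})
f = Fact()
assert isinstance(f, Fact) and isinstance(Fact, type)
del Fact  # the class is dropped just like a regular object
```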
So, it is possible to use the ORL-driven MDL language to
- Store pure facts (using any regular database)
- Store distilled knowledge (using any regular database)
- Produce knowledge from facts (analysis)
- Produce facts from knowledge (synthesis)
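The last two bullet points can be sketched in Python, with dynamically created Python classes standing in for MDL classes (a hypothetical illustration, not the MDL interface): analysis induces knowledge (a class) from the common structure of facts, and synthesis instantiates new facts from that knowledge.

```python
def analyse(facts):
    """Analysis: induce a class (knowledge) from the keys shared by
    all fact dictionaries."""
    common = set.intersection(*(set(f) for f in facts))
    return type("Induced", (object,), {"slots": sorted(common)})

def synthesise(knowledge, values):
    """Synthesis: produce a fact (object) realizing the knowledge."""
    fact = knowledge()
    for slot, value in zip(knowledge.slots, values):
        setattr(fact, slot, value)
    return fact

facts = [{"name": "cat", "legs": 4},
         {"name": "dog", "legs": 4, "barks": True}]
Animal = analyse(facts)               # knowledge produced from facts
fox = synthesise(Animal, [4, "fox"])  # a fact produced from knowledge
```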
It is important that ORL does not imply any particular kind of data storage. Like SQL, ORL is just a data definition and data manipulation interface between any client and any server (between an application and a database, or between two remote applications). Unlike SQL, ORL provides
- Object-oriented semantics and features
- Complete set of methods to access classes as regular objects
Any names or products referenced are trademarks or registered trademarks of their respective owners.
Object-Relational Language is a trademark of ProPro Group