HOWTO Micro-Kernel - the Six Crises

What is a crisis and how is it solved?

BASICS

The NHI1 project is a framework consisting of:

  • a programming model (managed object),
  • an application server (theLink),
  • a C language extension (the-new-C) and
  • an integrated connection to a large number of programming languages (C, C++, C#, VB.NET, Java, Python, Ruby, Perl, PHP, Tcl and Go)

The initial problem was to design a language binding for an existing application server defined by a C header file.
On the one hand there are well-known tools for this task, and on the other hand every programming language has its own tool box to solve the problem.
NONE of the known tools was able to solve the life-cycle problem in such a way that objects could be created and destroyed both inside and outside the application server while keeping the external and internal references always in sync.

This is also the reason why application servers are always developed and offered in ONE language.

#1 EXTENSION CRISIS - or why the "new-C" is required

The problem has always been to keep the code in sync: each programming language has its own individual approach to extension programming, which ultimately results in a very small amount of shared code.
This causes a huge programming effort and slows down the development of theKernel considerably.

I called this the EXTENSION CRISIS.

To solve this problem, the following steps were taken:

  1. the alc-compiler was developed: theCompiler
  2. the "C" language got an extension called: alc new C
  3. the "C" header file got a technology to add extra attributes to a definition: alc parser (illustrated in the sketch below)
  4. the code base for the extensions was synchronized and finally merged into a single programming model: HOWTO Micro-Kernel - the Internals

As a result, the ALC compiler (All Languages Compiler) was developed.
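
To make step 3 concrete, here is a hypothetical sketch of what such an annotated "C" header could look like; the attribute macro ALC_ATTR and all names in it are invented for this example and are NOT the real alc syntax:

    /* sketch only: ALC_ATTR is an invented attribute macro, read by the
     * alc parser and invisible to the normal C compiler */
    #define ALC_ATTR(...)

    /* the extra attributes tell the alc-compiler how to map a plain C
     * definition into the managed-object world of every target language */
    ALC_ATTR(class=MyServer, constructor)
    struct MyServer * MyServerCreate (void);

    ALC_ATTR(class=MyServer, destructor)
    void MyServerDelete (struct MyServer * srv);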

The alc-compiler supports two programming models:

  1. The C-model is for code with a "STRUCTURED" interface (C, C++, C#, Java and Go) and special syntax (VB and Perl)
  2. The S-model is for code with a "SCRIPT-C" interface (Tcl, Python, PHP, Ruby)

The C-model is more general and can be used for a variety of tasks, such as creating code for a wide variety of programming languages, but also configuration files or other structured data.
The S-model is a simplified form of the C-model and is easier to use because ONLY C files are generated, which are then adapted using the c-macro technology (see the sketch below).
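
As a rough illustration of the S-model idea, ONE generated C file could be specialized per scripting language through macros selected at compile time; the macro names below are invented for this sketch and are not the real alc macro set:

    /* sketch only: invented macro names, ONE generated C file for all
     * scripting languages, specialized at compile time */
    #if defined(ALC_TARGET_TCL)
    #  include <tcl.h>
       /* in the Tcl build an error is stored in the interpreter result */
    #  define ALC_ERROR(env, msg) \
         Tcl_SetObjResult((Tcl_Interp *)(env), Tcl_NewStringObj((msg), -1))
    #elif defined(ALC_TARGET_PYTHON)
    #  include <Python.h>
       /* in the Python build an error becomes a RuntimeError exception */
    #  define ALC_ERROR(env, msg) \
         PyErr_SetString(PyExc_RuntimeError, (msg))
    #endif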

#2 LIFETIME CRISIS

The next serious problem was the different life-time and life-cycle of instances in the different programming languages. This is closely related to whether or not there is a garbage collector.

I called this the LIFETIME CRISIS.

The life-time and life-cycle of an instance vary from:

  • infinite, an instance is never deleted. (e.g. C, C++, Tcl)
  • immediately, (hard refCount) an instance is deleted immediately when the last reference has disappeared. (e.g. Python, C++ reference)
  • later, (soft refCount) an instance will be deleted at some point but it is difficult to predict when that will be. (e.g. Java, C#)

There are also aspects such as:

  • an instance is deleted but probably in a completely different thread.
  • an instance is never deleted except at the end of the program, and even that only sometimes.
  • an instance is actually NEVER deleted but it is OVERWRITTEN (deleted and newly created) if a NEW instance is created under the same NAME.

In order to cope with all these imponderables, the MANAGED OBJECT technology was developed and the STORAGE MANAGEMENT was implemented.
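
A minimal sketch of the managed-object idea, assuming a hypothetical layout (not the real NHI1 implementation): the object keeps separate counters for references held inside the server and proxies held by a target language, and is only physically freed when both sides agree.

    /* sketch only: hypothetical layout, not the real NHI1 managed object */
    #include <stdlib.h>

    typedef struct ManagedObject {
      int  internalRef;          /* references held inside the server       */
      int  externalRef;          /* proxy objects held by a target language */
      void (*freeData)(void *);  /* destructor for the native payload       */
      void *data;                /* the native payload                      */
    } ManagedObject;

    /* physically delete only when the server AND the binding agree */
    static void ManagedObjectCheckDelete (ManagedObject *obj) {
      if (obj->internalRef == 0 && obj->externalRef == 0) {
        if (obj->freeData) obj->freeData(obj->data);
        free(obj);
      }
    }

    /* called by the language binding: a soft-refCount language (Java, C#)
     * calls this from its finalizer, possibly much later and from another
     * thread; a hard-refCount language (Python) calls it immediately */
    void ManagedObjectExternalUnref (ManagedObject *obj) {
      obj->externalRef -= 1;
      ManagedObjectCheckDelete(obj);
    }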

#3 UPDATE CRISIS

The next serious crisis was ultimately due to the fact that resources (working time) are finite.
The more successful something becomes and the more programming-language extensions have been implemented, the greater the effort to keep everything completely in sync.
Every new feature in theKernel required a post-implementation in 10 other extensions along with the necessary tests.

I called this the UPDATE CRISIS.

In order not to endanger the whole development of LibMkKernel, the TOKEN STREAM COMPILER Technology was introduced.

Ultimately, something was created that reads the definitions of the leading project (theKernel) and then automatically implements them in the various programming languages.
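
A hedged illustration of this idea (the declaration and the generated signatures are invented for this example): only the leading C definition is written by hand; everything in the comment block would be emitted by the TOKEN STREAM COMPILER for each target language.

    /* hand-written, leading definition in theKernel (invented example) */
    extern int MkBufferAppend (struct MkBuffer *buf, const char *data);

    /* generated from the token stream of the definition above,
     * once per target language, for example:
     *   Python : buf.Append(data)
     *   Java   : buf.Append(data)
     *   Tcl    : $buf Append $data
     * a new feature is thus written ONCE in theKernel and the other
     * ten extensions follow automatically. */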

#4 DOCUMENTATION CRISIS

The next serious crisis was ultimately based on the fact that a carefully maintained project should also be carefully documented.
Now it is difficult to find an approach in which 11 programming languages are documented equally well, with the additional requirement that the documentation should be Close-to-Code.
Close-to-Code means that if the programmer detects a documentation bug, he should be able to fix this bug immediately and WITHOUT any auxiliary means.
The threshold for eliminating a documentation bug should therefore be set very low.

I called this the DOCUMENTATION CRISIS.

From this requirement, the TOKEN STREAM COMPILER was linked to DOXYGEN, so that the following was achieved:

  • All languages that are supported by the TOKEN STREAM COMPILER can be documented by DOXYGEN
  • The DOXYGEN code for the documentation is generated by the TOKEN STREAM COMPILER in the same way as the code for the extensions.
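
As a sketch of what this means in practice (the function and the comment text are invented for this example), a DOXYGEN block is emitted right next to the generated code, so the documentation lives Close-to-Code:

    /* sketch only: both the doxygen block and the declaration below are
     * generated by the TOKEN STREAM COMPILER (invented example) */

    /** \brief append \e data to the buffer \e buf
     *  \param buf   the buffer instance
     *  \param data  the zero-terminated string to append
     *  \return the new size of the buffer
     */
    extern int MkBufferAppend (struct MkBuffer *buf, const char *data);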

#5 THIRD PARTY SOFTWARE CRISIS - or how to compile a "native-software" into a "managed-object-application"

It's easy to write a "C" extension like Meta-Object-Support when you have control over the source code of the underlying library.
Unfortunately, this is NOT the norm. The norm is that the library is only defined via the header file and all Meta-Object-Support has to be layered around the native-library.

I called this the THIRD PARTY SOFTWARE CRISIS.

To solve the problem, the Native-Library-Importer was created (see the sketch after the list below).

  • A configuration file in the form of a C header file is used to convert the native name-space into a managed-object name-space.
  • A layer is added that generates a managed-object output from a native-library input.
  • This layer is kept very slim by using only "C"-inline-code.
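
A minimal sketch of this layering, reusing the hypothetical ManagedObject from the sketch in crisis #2 and using sqlite3 only as a familiar stand-in for a native-library that is known through its header alone; all managed names are invented:

    /* sketch only: invented managed-object wrapper around a native call,
     * the native library is known ONLY through its header file */
    #include <sqlite3.h>

    /* name-space conversion idea, driven by the configuration header:
     *   native: sqlite3_close  ->  managed: Sq3LiteClose */
    static inline int Sq3LiteClose (ManagedObject *obj) {
      /* the payload of the managed object is the native handle */
      int rc = sqlite3_close((sqlite3 *) obj->data);
      /* translate the native return code into the managed error path */
      return (rc == SQLITE_OK) ? 0 : -1;
    }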

#6 TESTING CRISIS - or how RPC reduces programming work by 90%

The next crisis was the result of success. After all problems had been eliminated and the connection to the target languages worked smoothly, all of this had to be tested. It's hard to accept that you can create a multi-language connection within 5 days and then need 5 weeks to write all the test cases needed to really test all functions in all supported languages.

I called this the TESTING CRISIS.

To solve this crisis the alc-RPC-backend was developed.

Directly compile a managed-object application into an rpc-server and/or rpc-client.

RPC means that every meta-library is automatically split into a client and a server side, and of course across different programming languages. The result is, for example, a client side in TCL that holds the test code, while the actual test is then automatically executed in the target language. It is therefore no longer necessary to create the test cases per language; they are replaced by ONE test case written in TCL.

                   LibMySuper.so          → library with managed-object support
                        |
                   alc-Compiler           → generate META code
                        |
              alc-Extension-Backend       → generate (Tcl,Perl,Python,Ruby,Php,C,C++,C#,VB.Net,Go,Java) extension
                        |
                 alc-RPC-Backend          → generate RPC-Code in (Tcl,Perl,Python,Ruby,Php,C,C++,C#,VB.Net,Go,Java)
                        |
          -----------------------------
          |                           |
          generate RPC-Client         generate RPC-Server
          |                           |
          LibMySuperClient.tcl        LibMySuperServer.(tcl|pl|py|rb|php|c|cc|cs|vb|go|java)
          |                           |
          - tcltest...   ->           test-server-function-XYZ
            |                         |
            tcltest-OK   <-           return-answer

The only thing the programmer has to write is the tcltest script to test a capability. Everything else is created automatically by the alc-RPC-backend.
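
To make the server side of the diagram a little more concrete, here is a hedged sketch of what a generated dispatcher in the "C" target could look like; all names and the wire format are invented for this example:

    /* sketch only: invented generated code for the rpc-server side.
     * the client (e.g. the tcltest script) sends a function name plus
     * serialized arguments; the generated dispatcher routes the call. */
    #include <string.h>

    typedef struct RpcCall {
      const char *name;         /* e.g. "test-server-function-XYZ" */
      const char *args;         /* serialized arguments            */
      char        answer[256];  /* serialized return-answer        */
    } RpcCall;

    int RpcServerDispatch (RpcCall *call) {
      if (strcmp(call->name, "BufferAppend") == 0) {
        /* ... unpack call->args, invoke the native function,
         *     pack the result into call->answer ... */
        return 0;
      }
      return -1;  /* unknown function -> rpc error back to the client */
    }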