LLI and HLC

So what are they?

LLI

LLI is the Low Level Interpreter, a lightweight emulated assembly language similar to uxn. It's about 600 lines of code all together, assembler and interpreter included. The virtual system it emulates is 32-bit, meaning it can in theory address up to 4 GB of RAM; this is in contrast to uxn, which is 16-bit and can therefore only work with 64 KB of RAM. Working with 32 bits did introduce challenges and made parts of the system clunkier, requiring separate functions that operate on bytes, shorts, and ints. In some parts of the language I decided to scrap that approach and make the functions work with 32-bit values, forcing casts between widths. This worked nicely.

The language is designed around memory-mapped IO as its high-level abstraction. I implemented a system of 'cores': a core can be passed to the interpreter as a command line argument, and it will be mapped to a given address. As of writing I have only made two cores, the Unix core and the time core. The Unix core provides a mapping to stdin, stdout, and stderr: reading or writing a byte at one of its addresses reads or writes a byte on the corresponding file descriptor. I later added support for a small syscall interface and for writing to arbitrary file descriptors.
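A minimal sketch of the core idea, with invented names, addresses, and window sizes (nothing here is taken from LLI's actual layout): a core registers at a base address, and stores into its window are routed to the core instead of ordinary RAM.

```go
package main

import "os"

// Core is a hypothetical interface for a memory-mapped device.
type Core interface {
	Write(offset uint32, b byte)
}

// UnixCore routes offsets to file descriptors; here only
// offset 1 (stdout) is wired up, as an illustration.
type UnixCore struct{}

func (UnixCore) Write(offset uint32, b byte) {
	if offset == 1 {
		os.Stdout.Write([]byte{b})
	}
}

type VM struct {
	mem   []byte
	cores map[uint32]Core // base address -> core
}

// Store checks whether the address falls inside a core's window
// (a made-up 256-byte window here) before touching plain RAM.
func (v *VM) Store(addr uint32, b byte) {
	for base, c := range v.cores {
		if addr >= base && addr < base+0x100 {
			c.Write(addr-base, b)
			return
		}
	}
	v.mem[addr] = b
}

func main() {
	vm := &VM{
		mem:   make([]byte, 1<<16),
		cores: map[uint32]Core{0xFF00: UnixCore{}},
	}
	for _, ch := range []byte("hi\n") {
		vm.Store(0xFF01, ch) // each byte goes to stdout, not RAM
	}
}
```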

The time core was far simpler, just providing a set of addresses: one for the current year, one for the month, and so on. Whenever they were read, their values were updated with the current time.
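That refresh-on-read behaviour can be sketched like this (the offsets are invented for illustration, not LLI's real ones): each read re-queries the clock, so the mapped values are never stale.

```go
package main

import (
	"fmt"
	"time"
)

// TimeCore is a hypothetical read-only core: every read
// fetches the current time and returns the requested field.
type TimeCore struct{}

func (TimeCore) Read(offset uint32) uint32 {
	now := time.Now()
	switch offset {
	case 0:
		return uint32(now.Year())
	case 1:
		return uint32(now.Month())
	case 2:
		return uint32(now.Day())
	case 3:
		return uint32(now.Hour())
	}
	return 0
}

func main() {
	var tc TimeCore
	fmt.Println(tc.Read(0), tc.Read(1), tc.Read(2))
}
```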

HLC

HLC (High Level Compiler) is still in the works, but I see it as a high-level language that will compile down to LLI. I'm currently working on its lexer, basing it off Rob Pike's lecture on the Go template lexer. It can be found online easily, and it's a good listen if you're at all interested in compilers. The general gist is that you write a function we can call the start state; it emits any lexical tokens it finds and returns another function, which becomes the next state. The benefit of this is that you can follow different execution paths depending on previous values in the input, which makes errors far easier to handle.
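A toy version of the pattern, shrunk down from what Pike describes (the token kinds and grammar here are made up, and tokens are collected in a slice rather than sent over a channel): each state function lexes one thing, appends any tokens, and returns the next state; lexing ends when a state returns nil.

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

type token struct {
	kind string
	val  string
}

type lexer struct {
	input  string
	pos    int
	tokens []token
}

// stateFn is the heart of the pattern: a state that returns the next state.
type stateFn func(*lexer) stateFn

func lexStart(l *lexer) stateFn {
	if l.pos >= len(l.input) {
		return nil // end of input: no next state
	}
	c := l.input[l.pos]
	switch {
	case unicode.IsDigit(rune(c)):
		return lexNumber
	case c == ' ':
		l.pos++
		return lexStart
	default:
		return lexWord
	}
}

func lexNumber(l *lexer) stateFn {
	start := l.pos
	for l.pos < len(l.input) && unicode.IsDigit(rune(l.input[l.pos])) {
		l.pos++
	}
	l.tokens = append(l.tokens, token{"number", l.input[start:l.pos]})
	return lexStart
}

func lexWord(l *lexer) stateFn {
	start := l.pos
	for l.pos < len(l.input) && l.input[l.pos] != ' ' {
		l.pos++
	}
	l.tokens = append(l.tokens, token{"word", l.input[start:l.pos]})
	return lexStart
}

func main() {
	l := &lexer{input: "add 42 7"}
	for state := lexStart; state != nil; {
		state = state(l)
	}
	var parts []string
	for _, t := range l.tokens {
		parts = append(parts, t.kind+":"+t.val)
	}
	fmt.Println(strings.Join(parts, " "))
	// prints: word:add number:42 number:7
}
```

The driver loop in main is the whole control flow: there's no big switch over a "current state" variable, because the state *is* the function, which is what makes context-dependent paths (and error states) cheap to add.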

That's as far as it has gotten so far, but I'm still working on it. Once the lexer is finished I'll need to build a parse tree from the lexical tokens, although I'm considering using yacc/bison to handle that.