* ⚡️ Feature: Lexer Interface cleanup (#14)
* LexerInterface
Defined the lexer interface
* Parser
- Fixed import for `Token` class
- Removed the token-management fields `tokens`, `currentToken` and `tokenPtr`; these are now replaced by the `lexer` field (a `LexerInterface`), which manages all of this for us
- Removed the constructor which accepts a `Token[]`; we now only accept a `LexerInterface`
- Removed `nextToken()`, `hasTokens()`, `getCurrentToken()`, `previousToken()`, `setCursor(ulong)` and `getCursor()`.
- The above are now called via the `lexer` instance
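Taken together, the methods the `Parser` no longer owns suggest an interface along these lines. This is a hedged sketch in D: the method names come from this changelog, but the exact signatures, attributes and module layout are assumptions.

```d
module lexer.core.lexer;

// Assumed location of the Token class after the move to lexer.core
import lexer.core.tokens : Token;

/**
 * Hypothetical sketch of the lexer interface the Parser now
 * consumes instead of managing a Token[] and cursor itself
 */
public interface LexerInterface
{
    /* Cursor-style token management previously owned by the Parser */
    public Token getCurrentToken();
    public void nextToken();
    public void previousToken();
    public bool hasTokens();
    public void setCursor(ulong cursor);
    public ulong getCursor();

    /* Source-position reporting */
    public ulong getLine();
    public ulong getColumn();

    /* Access to the full token stream */
    public Token[] getTokens();
}
```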
Parser (unit tests)
- Migrated to new `LexerInterface`+`BasicLexer` system
- Hoisted out common imports for unit tests into a `version(unittest)` block
TypeChecker (unit tests)
- Hoisted out common imports for unit tests into a `version(unittest)` block
- Migrated to new `LexerInterface`+`BasicLexer` system
LexerInterface
- Moved to new `lexer.core` package
- Documented module and class
Commands
- Fixed imports for the (now) `BasicLexer`
- Fixed imports for the (now) `lexer.core` package
Compiler
- Fixed imports for the (now) `BasicLexer`
- Use `LexerInterface` instead of `Lexer`
- The `doLex()` method now uses an instance of `BasicLexer` (held as a `LexerInterface`), downcasting it to call `performLex()` so the input is tokenized and the tokens made available
- The `doParse()` method now takes in an instance of `LexerInterface` rather than `Token[]`
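The two changes above can be illustrated with a minimal, self-contained D sketch of the downcast pattern. Here `string[]` stands in for the real `Token[]`, and everything other than the `LexerInterface`, `BasicLexer` and `performLex()` names is an assumption for illustration.

```d
import std.stdio : writeln;
import std.array : split;

interface LexerInterface
{
    string[] getTokens();
}

class BasicLexer : LexerInterface
{
    private string source;
    private string[] tokens;

    this(string source)
    {
        this.source = source;
    }

    /* Tokenization lives on the concrete type, not the interface */
    public void performLex()
    {
        this.tokens = split(this.source);
    }

    public string[] getTokens()
    {
        return this.tokens;
    }
}

void main()
{
    // The compiler holds a LexerInterface but downcasts once to
    // kick off tokenization, as doLex() is described as doing
    LexerInterface lexer = new BasicLexer("int x = 1");
    (cast(BasicLexer) lexer).performLex();
    writeln(lexer.getTokens()); // prints ["int", "x", "=", "1"]
}
```

Keeping `performLex()` off the interface means alternative lexer kinds are free to tokenize lazily, at the cost of this one downcast in `doLex()`.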
BasicLexer (previously Lexer)
- Moved to the `lexer.kinds` package
- Now implements `LexerInterface`
- Documented module and class
- Documented the `LexerInterface` methods
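Since `BasicLexer` now implements `LexerInterface`, its interface methods can carry explicit `override` markers, as later entries in this log note for `getLine()` and `getColumn()`. A hedged, minimal D sketch of that shape (field names, types and bodies are assumptions):

```d
import std.stdio : writeln;

/** Minimal stand-in for the interface in `lexer.core` */
interface LexerInterface
{
    ulong getLine();
    ulong getColumn();
}

/**
 * BasicLexer implements LexerInterface; `override` marks the
 * interface methods explicitly, purely for clarity
 */
class BasicLexer : LexerInterface
{
    private ulong line = 1;
    private ulong column = 1;

    /** Returns the current line position of the lexer */
    public override ulong getLine()
    {
        return this.line;
    }

    /** Returns the current column position of the lexer */
    public override ulong getColumn()
    {
        return this.column;
    }
}

void main()
{
    LexerInterface lexer = new BasicLexer();
    writeln(lexer.getLine(), ":", lexer.getColumn()); // prints 1:1
}
```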
Exceptions
- Moved to the `lexer.core` package
- Fixed import of `Token` class
- Now uses `LexerInterface`
Core.Lexer.Package
- Documented package module
Tokens
- Moved to the `lexer.core` package
- Documented module and class
Check
- Fixed import for `Token`
- Fixed import for `BasicLexer`
* `core.lexer` (package)
- Documented all public imports
* Exceptions
- Documented the module
- Documented `LexerError` and its members
- Documented `LexerException`, its members too
* Tokens
- Documented the fields (using proper syntax)
- Documented constructor and methods
* BasicLexer
- Removed now-completed TODO
- Added (for clarity) `override` keywords to the `getLine()` and `getColumn()` methods
- Grouped `getLine()`, `getColumn()` and `getTokens()` together
- Marked `getTokens()` with `override`
- Documented `getTokens()`
* Check
- Removed weird TODO that makes no sense
- Documented some of the members of `SymbolType`
* Check
- Documented a few more enum members of `SymbolType`
- Fixed documentation (and added a TODO) for `SymbolType.LE_SYMBOL`
* Check
- Documented a few more enum members of `SymbolType`
* Check
- Documented `isType(string)`
- Added a TODO for `isType(string)`: "Check if below is even used"
- Documented `isPathIdentifier(string)`
* Check
- Updated description of `isPathIdentifier(string)` to note it can contain underscores
- Documented `isIdentifier(string)`
- Updated `SymbolType.IDENT_TYPE` to acknowledge underscores
- Documented `isAccessor(Token token)` and `isModifier(Token)`
* Check
- Documented `isIdentifier_NoDot(Token tokenIn)`, `isIdentifier_Dot(Token tokenIn)`, `isNumericLiteral(string token)`
- Removed unneeded import of `BasicLexer`
- Moved import to the top of file
* Check
- Documented `getSymbolType(Token tokenIn)`, `isMathOp(Token token)`, `isBinaryOp(Token token)`
* Check
- Documented the `symbols.check` module
* Builtins
- Properly documented `getBuiltInType(TypeChecker, string)`
* Builtins
- Documented module
* Typing (core)
- Documented module
- Documented all members
* Exceptions (lexer)
- Fixed documentation missing parameters
* Check
- Make comments docs/ddox compatible
* BasicLexer
- Fixed parameter name in documentation
* BasicLexer
- Fixed formatting in documentation for class
* Typing (core)
- Documented all remaining class members and fields