Up next:

Try reordering the clauses in TotalComparer's #recurse().

Maybe try adding some more tests of TotalComparer with types mixed together.

Add remaining tests for SpoofTuple's fromNonceTuples() and maybe its dependencies.

Put all the needed instance members of SpoofTuple under test.

Experiment with displaying .with values in tests, maybe in a 
    new `State: ...` text segment, perhaps conditionally based 
    on a new .and value, or one in a new TestTuple property.

Look at adding .and values to TestTuple in a way that allows them to be 
    passed along with .on and .of in separate test-array objects, as well 
    as any other values that I may have missed when enhancing TestTuple.

Consider and experiment with ways to display spoofed elements 
    in a way that I like in test outputs.

Try changing private TestTuple fields to public fields / properties, 
    as I've done in SpoofTuple, to allow comparison in tests.

Try switching to nonce SpoofTuple equivalents for SMF and .plus, 
    or some renaming of / replacement for .plus.


Coming soon:

Once inlined spoofing is working and SpoofTuple's methods are passing 
     their tests, try out switching from nonces to SpoofTuples.

Once inlined spoofing is working, look at simplifying SOF's semantics, 
    or else switching to the semantics I was aiming toward last time.

Add a new temporary path in TestRunner to use the parallel spoofing, 
    or use a similar temporary experimental approach.

Choose a final-ish syntax for SOF members and maybe SMF members based 
    on their uses, maybe without poly-member spoofing at first.

Experiment with changing how some SpoofTuple-using system method excludes 
    nonce objects with .not defined, passing through the original if it is 
    ST-like but has .not defined, and otherwise replacing the original 
    with a SpoofTuple for use by SOF and SMF.

Try out SOF with another model class and experiment with refining it to return 
    instances of objects that have the first name in each syntax chain.

Maybe experiment with spoofing of multiple object / class members in SOF and SMF 
    based on .plus and .with, maybe gathering inbound spoof definitions, 
    with each spoofed type or object addressed collectively once.

Try out refactoring TestRunner into more classes, along with 
    de-crufting and any reorganizing.

See if I can find better names for SMF and SOF, maybe changing them 
    to SpoofClassMethodsFixture and SpoofInstanceMethodsFixture, 
    respectively, and change their tests / usages to match.

Experiment with adding Sets and maybe other types to TotalComparer under test.

Look at factoring TotalComparer again.

Move tests around so that SelfTests only contains direct tests of 
    test-system classes, while Tests contains all the indirect tests 
    of the system / style using topic code.

Try dog-food testing of restoreTargets() of SMF and 
    a new / finished restoreTargets() of SOF.

Try out testing of spoofing two or more classes with SMF at the 
    same time, to make sure SMF is handling its Map keys right.

Also try out testing of spoofing two or more objects with SOF at the 
    same time, to make sure SOF is also handling its Map keys right.

Maybe add a way to test for undefined properties directly, 
    maybe with an .and term plus an .out of "undefined".

Consider renaming the spoofing property for test tuples to .how, .as, or .while.

Maybe add non-dog-food testing of TestTuple and SpoofTuple, 
    if I continue to use the latter.

Maybe add some simple tests to check output from cross-recursive displaying, 
    with Maps and Objects with entries and members of the other type.

Maybe keep putting new code under test with an existing system, even if 
    I'm also dog-fooding it, or maybe just the key system parts.


Later on:

Consider using eval() with new tuple syntax for getting results in .from with less code.

Consider TestTuple members like `.as`, `.and`, and maybe `.see` (for displaying) 
    and any others I can think of, to support calculating values to compare, 
    to support a not-equal comparison directly or indirectly, to support switching 
    between comparing instance identities and contents, and maybe more.

Probably add a new test frame just for actuals that are throw results.

Probably factor the repeated elements of test frames to another method or class structure. 

Continue trying out things with model classes using TDD, to see how it goes.

Factor test frames to a new class or classes, with their repeated 
    internal code and dependencies also moved in some way, 
    to better support all the variation in an object-oriented manner.

Probably factor displaying in TerminalReporter to its own class or classes, 
    possibly using the Facade pattern.

Look at maybe letting throws propagate directly, instead of catching them, 
    or else some other kind of handling that's more informative than a 
    raw fail, such as custom-displaying the stack after a test fail.

Maybe experiment with a `.groups` member on tuples, which would contain an array 
    of grouping texts, which would be used to group tests below the class level, 
    and which probably would also restart values like classes and methods do.

Probably try a loop system based on classes plus methods plus any defined groups, 
    instead of the current interpretive grouping system.

Possibly add spots for arbitrary workings in the test frames, including 
    places just for test-isolation code along with places for any other workings.

Add unit tests for my new system using an older, established test framework, 
    possibly dog-fooding with my new system, and possibly working by TDD.

Maybe allow skipping tests' .with / .initors, and maybe other nonce-tuple 
    properties if possible, by setting default values if they are omitted.

Maybe rename "callers" to "runners" and the existing "runner" to something else.

Probably try adding file-watching and re-running the tests when files change.

Work out how to respond to the terminal to avoid re-runs from scratch.

Add regions, do any de-crufting, etc.

Perhaps add a web runner similar to what I did with earlier test frameworks.

Rename the code to a new, truly unique name.



+ To run a file from the `scripts` node in package.json,
    its content has to be "node ...file...", not just the file.

+ You can add arbitrary options, like "other", to the scripts node.

+ To run a script you have in scripts, you use `npm run [name]` in the terminal.

+ alert() isn't defined within Node.js, but console.log() is.

+ You no longer need `esm` in Node.js to use modules, but you still have to have
    `"type": "module"` in a package.json for the folders containing module JS files.
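The package.json points above (a scripts entry needs a "node ..." command, arbitrary script names are allowed, and `"type": "module"` enables ES modules) can be sketched as follows — the file and script names here are placeholders, not the project's actual ones:

```json
{
  "type": "module",
  "scripts": {
    "start": "node main.js",
    "other": "node scratch.js"
  }
}
```

With this in place, `npm run start` or `npm run other` in the terminal runs the corresponding file.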

+ With `chalk`, you can use hex() and bgHex() to choose arbitrary colors,
    including better ones than many of chalk's predefined options.

+ With chalk, you can indent with `"\t" + chalk.someColor("some text")`
    within console.log(), with any aliasing you've written, if you want.

+ To find out the number of columns in the terminal, you can address `process.stdout.columns`,
    and there are other useful properties on `process.stdout`, including `.rows`.
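A minimal sketch of reading the terminal size, with fallbacks since `columns` and `rows` are undefined when output isn't a TTY (for instance, when piped to a file):

```javascript
// process.stdout reports terminal dimensions when attached to a TTY.
const columns = process.stdout.columns ?? 80; // fallback for non-TTY output
const rows = process.stdout.rows ?? 24;

console.log(`Terminal: ${columns} columns x ${rows} rows`);
```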

+ You can easily pad a string with `.padStart()` and `.padEnd()`.

+ There's no dedicated built-in for putting leading zeroes on a number, 
    though padding the stringified number works.
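The two padding notes above combine to cover the leading-zero case — a quick sketch:

```javascript
// Zero-pad a number by converting it to a string and padding its start.
const padded = String(7).padStart(3, "0"); // "007"

// padEnd works the same way from the other side.
const ended = "7".padEnd(3, "."); // "7.."

console.log(padded, ended);
```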

+ Class definitions are as scoped as anything else, so if you alter one in one context,
    for instance by changing a method's code, it isn't altered in other contexts, 
    though it does remain altered in its change context until you alter it back.
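A minimal sketch of that alteration behavior within one context — patching a method on a class, observing the change, and restoring the original, much as a spoofing fixture might (the class and method names here are made up for illustration):

```javascript
class Greeter {
  greet() { return "hello"; }
}

// Save the original method so it can be restored later.
const original = Greeter.prototype.greet;

// Alter the class: every instance in this context now sees the change.
Greeter.prototype.greet = function () { return "spoofed"; };
console.log(new Greeter().greet()); // "spoofed" while altered

// Restore; the alteration lasted only until changed back.
Greeter.prototype.greet = original;
console.log(new Greeter().greet()); // "hello" again
```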


Dependencies for my dev environment and/or Kitten itself:

I may want to remove some of these over time, replacing them with custom code, even if that code has fewer options.

✓ chalk — for log out colors — only needs built-in Node Modules
✓ mocha — for non-dog-fooding tests — may have auto-installed dependencies
✓ chai — for syntax in non-dog-fooding tests — may have auto-installed dependencies
✗ esm — for modules — no longer needed with Node 16+

Scratchpad for syntax ideas and concepts: