Thanks. I like algorithm questions.
I was surprised that my sample code for jbryant87 resulted in unintentional Unicode characters. So I offered him a second version that forced the argument n in CHR$(n) into the range 0..255 (8 bits). This fixed his problem, but it bothered me that I needed to do it at all.
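A minimal sketch of that fix, written in Python rather than TBASIC for illustration (the helper name `chr8` is mine, not from the original code): masking the argument with AND 255 keeps only the low 8 bits, so any 16-bit value collapses to a single byte.

```python
# Hypothetical sketch (Python, not TBASIC) of forcing the CHR$ argument
# into the 8-bit range 0..255, as the second version of the code did.
def chr8(n):
    """Return the single byte for n AND 255 (low 8 bits only)."""
    return bytes([n & 0xFF])

print(chr8(65))     # byte 65
print(chr8(0x141))  # 0x141 & 0xFF = 0x41, same byte as 65
```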
I came up with 2 competing theories:
- You guys had secretly added UTF-8 support to TBASIC: modified CHR$(n) to accept 16-bit arguments and convert them to multiple UTF-8 bytes, modified ASC(A$,i) and MID$ to recognize UTF-8-encoded characters and reconstruct the 16-bit values, and were too modest to mention any of these improvements.
- The simulator was handling strings differently than the real hardware.
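To make the first theory concrete, here is what it would imply, sketched in Python rather than TBASIC (the variable names are mine): a 16-bit argument to CHR$ would come out as more than one UTF-8 byte, and ASC/MID$ would have to walk those bytes to rebuild the original value.

```python
# Hypothetical illustration (Python, not TBASIC) of theory 1:
# CHR$(n) with a 16-bit n emitting multiple UTF-8 bytes, and
# ASC reassembling those bytes back into the 16-bit value.
cp = 0x0141                              # a 16-bit code point
encoded = chr(cp).encode("utf-8")        # what CHR$ would emit
print(len(encoded))                      # two bytes, not one
decoded = ord(encoded.decode("utf-8"))   # what ASC would reconstruct
print(hex(decoded))
```

Note the string is now longer in bytes than in characters, which is exactly the kind of behavior that would produce surprise Unicode output from code written for 8-bit strings.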
I got so excited about the first theory that I ignored the fact that the second one was much more plausible.
Gary D.