The previous texture_memory_alloc.hpp was written based on an
incorrect understanding of the "32-bit" and "64-bit" texture memory
address mapping.
The primary motivation is to rearrange the texture memory address map
so that "textures" (64-bit access) do not overlap with 32-bit
accesses, such as REGION_BASE or PARAM_BASE.
It is unclear why this reordering is useful--it should be the case
that the previous content of the cache is invalidated on write
regardless of when cache::init is called.
This makes it possible to change the serial baud rate without
uploading a new serial transfer program. I'm not sure how useful this
will be, but it is simple enough to add.
The client program is also substantially improved. Honestly, I do not
understand how or why this works. Experimentally, I found that feeding
the ft232h data in chunks of up to roughly 384 bytes works reliably,
both for reads and writes. Larger chunk sizes are (as expected)
faster, but the transfers do not appear to be consistently correct in
that case.
I have no logical explanation for this. The ft232h has a 1K FIFO for
each of the transmit and receive buffers.
This also enables RTS/CTS hardware flow control. Surprisingly, this
doesn't appear to affect reliability significantly.
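For illustration, a minimal Python sketch of the chunking and flow
control settings described above (the function name, port handling,
and the 115200 baud default are assumptions, not the actual client
code):

    import serial

    CHUNK_SIZE = 384  # empirically, the largest chunk size that stays reliable

    def send_bytes(port_name, data, baud=115200):
        # rtscts=True enables RTS/CTS hardware flow control
        with serial.Serial(port_name, baud, rtscts=True) as port:
            for off in range(0, len(data), CHUNK_SIZE):
                port.write(data[off:off + CHUNK_SIZE])
                port.flush()  # wait for the chunk to drain before sending more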
These client-side changes alone improve the typical transfer speed of
a 51572-byte file from 91.85 seconds to 25.89 seconds.
This appears to be heavily bottlenecked by the Python side of the
transfer--increasing the serial line speed has nearly zero effect on
the total transfer time.
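For reference, 51572 bytes in 25.89 seconds works out to roughly
1992 bytes/s (about 16 kbit/s of payload), well below what even a
modest serial line rate can carry, so a host-side bottleneck is
consistent with these numbers.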
As long as the target program voluntarily terminates itself at some
point, the serial_transfer loader can load multiple programs
consecutively without requiring a physical power cycle to reload the
transfer program from CD.
The current example.mk juggles two different "memory layouts", one for
"burn to a physical CD" and another for "load via serial cable".
Because the serial_transfer program now relocates itself to the end of
system memory, the 0x8c010000 area is now usable by programs loaded
via serial_transfer.
This still has issues, notably:

- Despite the first 16 kbytes of audio being loaded prior to starting
  the AICA ARM7 CPU, the GDROM drive returns "busy" for the following
  ~48 kbytes. This in turn causes the AICA to play audio from
  uninitialized memory.

- There is also a separate issue where the timing of changing the
  start address of the audio channel causes a faint popping sound
  throughout the audio playback.
I should do more timing experiments with the GDROM drive, and improve
this example to play the audio with fewer artifacts.
This combines my iso9660 parsing code with all of the prior gdrom
packet interface / command code.
The example, on real Dreamcast hardware, displays the first 2048 bytes [1] of every
file in the root directory on the serial console.
[1] or the size of the file, whichever is smaller
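As a rough illustration of the directory walk (a sketch, not the
actual parsing code from this commit), ISO 9660 directory records in
an already-read directory extent can be iterated like this in Python;
the field offsets follow ECMA-119, and reading the extent off the disc
is assumed to happen elsewhere:

    SECTOR_SIZE = 2048

    def iter_dir_records(extent: bytes):
        # yield (name, lba, size) for each record in a raw directory extent
        off = 0
        while off < len(extent):
            rec_len = extent[off]
            if rec_len == 0:
                # records never cross a sector boundary; a zero length
                # byte means the rest of this sector is padding
                off = (off // SECTOR_SIZE + 1) * SECTOR_SIZE
                continue
            lba = int.from_bytes(extent[off + 2:off + 6], "little")     # extent location
            size = int.from_bytes(extent[off + 10:off + 14], "little")  # data length
            name_len = extent[off + 32]
            name = extent[off + 33:off + 33 + name_len].decode("ascii", "replace")
            yield name, lba, size
            off += rec_len

Printing the first 2048 bytes of each file then amounts to reading
min(size, 2048) bytes starting at each record's lba.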
After thinking about this more, I realized it is probably never
useful, and certainly completely incorrect in all of the cases where
it was still being used in the examples.
Necessarily, this means that dma_start must now know the size of the
response, so that it can issue the appropriate number of ocbp
instructions.
This also cleans up the inconsistent _command_buf and _recieve_buf declarations.