There will no doubt be some varied opinions about this but I'm the only one who is right. LOL!
If you think about it, once a file that was written to work on the Einstein is loaded into Einstein memory it will work. So provided it worked before, whether using DOS calls (e.g. a vanilla CP/M program), just MOS calls, or other programming tricks such as direct hardware access as in a game, it will still work once it is loaded and executed.
What really concerns us regarding EinsDein is how and where those files are stored. File handling is a function of BDOS. BDOS processes DOS calls for creating, opening, reading, writing and deleting files. It is also responsible for processing the records of these files, i.e. pulling them from the storage device and writing them back to it. BDOS DOES NOT CARE WHAT THESE STORAGE DEVICES ARE because BDOS uses "duck" processing - as long as the storage device behaves like a duck, BDOS considers it to be a duck.

However, built into this highly democratic file system are some inherent limitations. CP/M can only address disks of 8Mb in size, but it can address up to 16 of these as discrete drives, e.g. A:-P:, so in theory the file system can address 16 * 8Mb = 128Mb and that's it. A paltry amount by today's standards, but it was a colossal amount of storage back in the early 80's when a single 10Mb HDD cost £500-1000. With a bit of tweaking, and if we abandoned USER codes (primitive directories), the number of drives could be increased to 32. However, this in itself has ramifications.

Each drive installed has a drive descriptor of 16 bytes and a bit-map allocation table. This maps which blocks on a storage device are allocated or free, and the numbers of the allocated blocks are stored in the file's directory entry, which tells the OS which blocks belong to that file. Currently the Einstein OS uses 2kb allocation blocks for its files and each block is represented by one bit, so a byte maps 8 * 2kb = 16kb. Each 8Mb drive therefore requires 8*1024kb / 16 = 512 bytes, and 16 drives would thus require 8kb of allocation tables. These are stored in RAM, not on the disk, so you can see that we could quickly run out of space to run programs in - for each drive we added we would have to reduce the TPA (transient program area) by 512 bytes.
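The arithmetic above can be sanity-checked in a few lines. Python here, just for the sums - the constants are exactly the ones quoted in the text:

```python
# Sanity check of the allocation-table sizes quoted above.
# Assumptions: 2kb allocation blocks, one bit per block, 8Mb drives, 16 drives.

BLOCK_SIZE = 2 * 1024            # 2kb per allocation block
DRIVE_SIZE = 8 * 1024 * 1024     # 8Mb per CP/M drive

blocks_per_drive = DRIVE_SIZE // BLOCK_SIZE      # 4096 blocks per drive
table_bytes_per_drive = blocks_per_drive // 8    # one bit per block -> 512 bytes
total_table_bytes = 16 * table_bytes_per_drive   # 8kb of RAM for 16 drives

print(blocks_per_drive, table_bytes_per_drive, total_table_bytes)  # 4096 512 8192
```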
We can increase the allocation size up to 32kb per block, but then a 1-byte file would occupy 32kb of disk space, and we are still not going to get a disk that's significantly larger. In reality we will want to address a single drive of around 32Gb without jumping through hoops, so it isn't hard to see that the Einstein OS's file system is hopelessly outdated for a modern-sized disk. This is why I am rewriting the OS to make it small enough and robust enough to handle modern-sized disks. This is very tricky because it needs 32-bit addressing for a big disk, it has to stay small and compact - ideally the same size that it already is - and it must be backward compatible with all of the existing software. It's a tall order and it is going to take a long time. But there is one thing you need to know: IT DOESN'T MATTER FOR NOW. Even if the file system used your interface in the traditional way and addressed, say, a couple of 8Mb drives, this would still be good enough and a vast improvement on anything that exists at the moment, or has existed since Einstein hard drives disappeared 30 years ago.
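To see why bigger blocks are a poor trade, here is a tiny illustration of the slack (wasted bytes) when a small file is stored in larger allocation units:

```python
# Wasted space ("slack") when a file is stored in fixed-size allocation blocks.

def slack(file_size, block_size):
    """Bytes wasted when file_size bytes are stored in block_size units."""
    blocks_needed = -(-file_size // block_size)   # ceiling division
    return blocks_needed * block_size - file_size

print(slack(1, 2 * 1024))     # 2047 bytes wasted at 2kb blocks
print(slack(1, 32 * 1024))    # 32767 bytes wasted at 32kb blocks
print(slack(2048, 2 * 1024))  # 0 - a full block wastes nothing
```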
What I recommend is this. The EinsDein hardware driver should be programmed into a second ROM and two new MOS calls created to read/write a single 512-byte sector, mirroring the existing ones for the FDD. The existing OS can then be very simply modified by selecting the device in the BIOS via its drive number - reads/writes to EinsDein will simply be redirected transparently, and to all intents and purposes it will be a large duck.
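A sketch of that redirect logic, in Python for clarity - the real thing would be a few Z80 instructions behind the BIOS entry points, and all the names, the drive mapping, and the sectors-per-track figure here are my own invented placeholders:

```python
SECTOR_SIZE = 512
EINSDEIN_DRIVES = {2, 3}      # assumption: drives C: and D: live on the SD card
SECTORS_PER_TRACK = 10        # assumption, only for the track/sector -> LBA sum

def fdd_read_sector(track, sector):
    # stands in for the existing FDD MOS call; dummy fill byte marks the path
    return bytes([0xF0] * SECTOR_SIZE)

def sd_read_sector(lba):
    # stands in for the proposed new EinsDein MOS call (one 512-byte sector)
    return bytes([0x5D] * SECTOR_SIZE)

def bios_read(drive, track, sector):
    """BDOS-facing read: same interface whichever device sits underneath."""
    if drive in EINSDEIN_DRIVES:
        return sd_read_sector(track * SECTORS_PER_TRACK + sector)
    return fdd_read_sector(track, sector)

assert bios_read(0, 0, 0)[0] == 0xF0   # A: still goes to the floppy
assert bios_read(2, 1, 4)[0] == 0x5D   # C: is transparently redirected
```

BDOS never sees the difference: it asks the BIOS for a sector and gets a sector, which is exactly the duck behaviour described above.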
This will then give us all the time in the world to develop a modernised OS.
There is a second approach. This is to use the native API which talks to the SD interface to simply save and load whole files - so it's a big storage depot. This means that the vanilla Einstein DOS doesn't know anything about EinsDein and can't read and write records through it directly, but files can be modified once in memory and saved back. This could be easily achieved by transient programs, or just added to the OS as additional commands with the code embedded.
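A minimal sketch of the depot idea, with a Python dict standing in for the SD card's native filing layer - every name here is hypothetical:

```python
depot = {}   # stands in for the SD card's native filing layer

def depot_save(name, data):
    # native API call: store a whole file by name
    depot[name] = bytes(data)

def depot_load(name):
    # native API call: fetch a whole file by name
    return depot[name]

# A transient program pulls a file into memory, patches it, and saves it
# back; Xtal DOS itself never sees the SD card as a drive.
depot_save("GAME.COM", b"\xc3\x00\x01")   # hypothetical program image
image = bytearray(depot_load("GAME.COM"))
image[0] = 0x00                           # modify it in memory
depot_save("GAME.COM", image)
```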
Perhaps for now both methods could be used by partitioning part of the SD card as a drive the Einstein OS can use directly and using the remainder as a file storage depot.
Regarding literature, it depends what you really want to know. Albert Revealed is VG for hardware, but CP/M calls are just like MSDOS calls (in fact MSDOS was cloned from CP/M). The DOS/MOS manual is very useful and is available free as a PDF from the Einstein reborn website. My list of MOS calls and their functions is in the files section of the Einstein User Group.
To truly understand the issues I'm talking about, you need to know how CP/M works under the hood, and I can recommend CP/M Revealed by Jack D. Dennon and Soul of CP/M by Mitchell Waite and Robert Lafore. Remember that these are old texts and use old tools and 8080 assembler syntax, not Z80. The main difference between CP/M and Einstein Xtal DOS is that Xtal is optimised for the Z80. Having disassembled it and compared it with CP/M 2.2, it seems to me that Xtal probably originally licenced the code from Digital Research and added their own improvements, but perhaps Trevor Brownen of Xtal Research, who is a forum member, can comment on that.
Certainly learning about CP/M will stand you in good stead, but it takes a long time to become a proficient Z80 coder - a constant balancing act between size, speed and efficiency - and a lot longer to understand the issues surrounding an OS and backward compatibility. I question whether you really need to become an expert in this area, or whether your existing talents in electronics are better employed in a collaborative effort with myself and others.