>this is what creates a speed problem...esp. on the slow computers
>
>FOR Myloop=1 TO itemnumber
>  AZ$=""
>  FOR loop=1 TO 48
>    AZ$=AZ$+CHR$(PEEK([StrHndl&]+gOffSet&))
>    INC(gOffSet&)
>  NEXT
>  DIAGNOSIS$(Myloop)=AZ$
>NEXT

Terrald,

First...

To address your speed question, a bit of optimization will make a big difference. Try this. (Untested)

FOR Myloop=1 TO itemnumber
  POKE @DIAGNOSIS$(Myloop),48                             'set length byte
  BLOCKMOVE [StrHndl&]+gOffSet&,@DIAGNOSIS$(Myloop)+1,48  'move string
  gOffSet& = gOffSet& + 48                                'offset of next string
NEXT

Even this could be tweaked a bit, but first see if it gets you within acceptable range.

Second...

Maybe I'm confused, but I'm not clear on what this loop is doing. If you are writing full arrays to disk and reading them back in, there is no need to add length bytes, which seems to be all this loop does. Why not just read the file into DIAGNOSIS$(0) and be done with it?

READ FILE#FILENUMBER,@DIAGNOSIS$(0),Filesize&

Or... put your data into a handle as you have done, then use XREF@ to access it:

DIM DIAGNOSIS&
XREF@ DIAGNOSIS$(16000)
itemnumber=LOF(FILENUMBER,50):Filesize&=LOF(FILENUMBER,1)
DIAGNOSIS&=FN NEWHANDLE(Filesize&)
OSErr=FN HNOPURGE(DIAGNOSIS&)   'I think you could dispense with this
READ FILE#FILENUMBER,[DIAGNOSIS&],Filesize&

Now you have your data in a _dynamic_ array DIAGNOSIS$() to use however you like. It occupies only the space of your data handle, with no duplication of data. To enlarge it, just change the handle size.

Finally...

I suspect some of your difficulties derive from miscalculating variable lengths. Remember that

DIM 49 str$(items)

sets up an array of strings at 50-byte intervals (the 49 chars + length byte). So if you have written this array to disk, you would want to calculate the number of items by dividing the file size by 50, and to address it as records you need a record size of 50. Could this be the source of some of your problems?

HTH