I just tried to edit my first reply to clarify a few expressions, but apparently it was too late, so apologies if anything is a little confusing or unclear.
Hi there, and thanks very much for your help. I'm a professional IT man, but I've never worked at the bit level, so forgive me if my questions seem a bit basic. My basic read code in C# is this:
int BlockSize = 4;
int BytesRead = 0;
byte[] block;
FileStream fs = new FileStream(filename, FileMode.Open);
BinaryReader br = new BinaryReader(fs);
// Read the file in 4-byte chunks. Note: ReadBytes never returns null -
// at end of file it returns a shorter (possibly empty) array, so test the length.
while ((block = br.ReadBytes(BlockSize)).Length == BlockSize)
{
    BytesRead += BlockSize;
    // On the first pass block == "fLaC" as expected.
    // On the second pass block[0] = 0, block[1] = 0, block[2] = 0, block[3] = 34.
    // If I apply your code (the cast is needed because << promotes the bytes to int):
    uint length = (uint)(block[1] << 16 | block[2] << 8 | block[3]);
    // Unsurprisingly I get 34, which is the value I have in block[3] (the other
    // elements are 0) - but interestingly 34 hex = 52 decimal, which is the
    // ASCII code for '4', the block type I'm looking for.
}
fs.Close();
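To check my understanding of the shift-and-OR line: it just packs three bytes into one 24-bit big-endian integer. Here is the same arithmetic sketched in Python (the first values are from my second read above; the second example is made up to show all three bytes contributing):

```python
# The C# expression block[1] << 16 | block[2] << 8 | block[3] packs
# three bytes into one 24-bit big-endian integer.
b1, b2, b3 = 0, 0, 34           # values from the second 4-byte read
length = b1 << 16 | b2 << 8 | b3
print(length)                   # -> 34

# A made-up example with all three bytes non-zero:
print(hex(0x12 << 16 | 0x34 << 8 | 0x56))  # -> 0x123456
```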
In my ignorance of bit manipulation, I'm guessing that on my second read the values of my byte array are interpreted like this:
block[0] = 0 // which means this is not the last block.
block[1] + block[2] + block[3] = the block type, which in this case is 4, if my guess that the 34 is a hex value is correct.
How am I doing?
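For reference, the FLAC format spec defines the 4-byte METADATA_BLOCK_HEADER as a 1-bit last-metadata-block flag, a 7-bit block type (0 = STREAMINFO, 4 = VORBIS_COMMENT), and a 24-bit big-endian length. A minimal sketch of that decoding in Python (the function name is my own):

```python
def parse_flac_block_header(header: bytes):
    """Decode a 4-byte FLAC METADATA_BLOCK_HEADER per the FLAC format spec."""
    is_last = (header[0] & 0x80) != 0   # top bit of byte 0: last-metadata-block flag
    block_type = header[0] & 0x7F       # low 7 bits of byte 0: block type
    length = header[1] << 16 | header[2] << 8 | header[3]  # 24-bit big-endian body length
    return is_last, block_type, length

# The second 4-byte read from the question: 0, 0, 0, 34
print(parse_flac_block_header(bytes([0, 0, 0, 34])))
# -> (False, 0, 34): block type 0 (STREAMINFO), body length 34 bytes
```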