If any of you have the time to answer a few questions, it'd fill a gap in my head!

I'm trying to understand integer types. What is the difference between signed types (e.g. integer, shortint, smallint, longint, int64) and unsigned types (e.g. cardinal, byte, word, longword)?

Why is the maximum value of a 16-bit Word (65535) twice that of a 16-bit SmallInt (32767), plus one?

I do realize that unsigned integer types can only hold 0..whatever and never negative values, but how does a computer know, at the binary level, whether an integer is negative? My guess is that the first bit tells whether it's positive or negative, which would explain why unsigned types can hold twice as much, plus one.

Thanks in advance!