was not expecting to find that in this spot....fucks that number mean tupac lol gave me 2 extra units before farm kicked on.

https://www.experts-exchange.com/questions/23061795/Binary-32767-32768.html

-1?

it is negative one lol, each town hall erases and also +1, so 2 extra units - look at the CPU!!!

Maybe I can help with this. Here's some info. There's actually a very simple method here for working out signed integers, and, because I'm me (sorry), also a whole pile of related info that you may or may not find useful... or may already know... do with it what you will.

-------------------------------------------------------------------------------------------

It really helps, if you're trying to get your head around signed binaries, to just ditch decimal. Decimal is great when you're counting with 10 fingers, but pretty much blows for anything binary based (i.e. everything to do with computers).

Use Hexadecimal. The shit actually makes sense in hex.

BYTE:

Decimal: 0-255, 256 possible states

**Hex: 00-FF, 100 possible states**

WORD (or SHORT):

Decimal: 0-65535, 65536 possible states

**Hex: 0000-FFFF, 10000 possible states**

DWORD (doubleword):

Decimal: 0-4294967295, 4294967296 possible states

**Hex: 00000000-FFFFFFFF, 100000000 possible states**

......etc

Clearly one of them is much easier to remember than the other.

If your value is being interpreted as a

*signed* number then yes,... blah blah blah

... bit 15 is being used as a sign bit (for words, or bit 31 for dwords.. etc) but then is it

"(n/2)-1 thru minus (n/2)"? or was it umm err plus one ... and the... um....

OMFG. No. Computers work with very simple concepts, we just make it difficult

because we have the wrong number of fingers for the job.

In HEX:

for a 16-bit ('SHORT' or 'WORD') signed integer, the value "-1" is FFFF

for a 32-bit (DWORD, doubleword) signed integer, the value "-1" is FFFFFFFF

for a 64-bit (QWORD, quadword ) signed integer, the value "-1" is FFFFFFFFFFFFFFFF

.......

(and yes, for an 8-bit signed binary int it would be FF)

they count backwards from there...

so for a 16-bit number:

-1 = FFFF

-2 = FFFE

-3 = FFFD

-4 = FFFC

-5 = FFFB

-6 = FFFA

-7 = FFF9

-8 = FFF8

-9 = FFF7 .... etc
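If you happen to have Python handy, you can watch the same count-down happen without a calculator. (This is just my illustration, not anything from the post: masking a negative number with 0xFFFF gives you its 16-bit two's-complement pattern, which is the same as 10000 hex minus the number.)

```python
# Two's complement in 16 bits: -n is stored as 0x10000 - n.
# Masking with 0xFFFF produces exactly that bit pattern.
for n in range(1, 10):
    print(f"-{n} = {-n & 0xFFFF:04X}")
# prints -1 = FFFF, -2 = FFFE, ... -9 = FFF7
```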

Here's the good bit:

YOU REALLY DON'T HAVE TO UNDERSTAND HEX TO USE IT FOR THIS.

Just do this:

Pull up the windows "calc" calculator or anything else that will convert hex to decimal.

... for calc you need to select 'scientific' mode from the 'view' menu ... at least in the XP

version I have - probably still the same in others.

Work out how many bits you're using... most likely for WC2 it will be 16 bits,

perhaps 32 for some things.

Ok divide that by 4... easy

this is how many zeros you want.... as in the table above.

i.e. WORD - 10000 DWORD - 100000000

16 bits - 4 zeros

32 bits - 8 zeros

64 bits - 16 zeros ..... etc

Let's try a random example... say -7629, just for lols

so .... if you want to know how to represent the number -7629 in a 16 bit signed integer

DO THIS:

1 - first select HEX MODE

2 - THEN enter a 1 and 4 zeros "10000"

3 - then select DECIMAL mode

4 - then enter your number and press '='

....... so in this case press [-] [7] [6] [2] [9] [=]

there's your answer: "57907"

----------DONE---------

*Note: that's the [subtract] button, not the [+/-] sign button; you are just subtracting the absolute value.*

If you want you can convert it back to hex, then it's: "E233"
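The calculator trick above is just two's-complement arithmetic by hand: 10000 hex minus the absolute value. A quick sanity check in Python (my sketch, any language would do):

```python
# "1 and four zeros" in hex, minus the absolute value of your number
value = 0x10000 - 7629
print(value)           # 57907
print(f"{value:04X}")  # E233 -- the same thing back in hex
```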

In the actual CPU, or RAM, or on a bus or a disk drive, it's

"1110001000110011"

These are all the same number; the only one that's real is the last one. The other two are just ways funny hairless apes like to think about numbers, because it suits their funny brains.

**NERDS CORNER:**

If, like me, you're a bit on the nerdy side, you may have wondered:

"if 57907 is 1110001000110011 and that actually equals -7629..

then how can we represent the

*actual* number 57907 as a

16-bit integer???"

Well, 57907 is processed as 1110001000110011; yes, it's exactly the same number. How can this be? Well, everything in a binary machine is ones and zeros.... "1110001000110011" could also represent the position of 8 black pixels in part of a monochrome bitmap, or virtually any other type of information that can be processed by a computer. It's up to the program to know where it wants to get the ones and zeros from, *and what type of data* it is that it is getting (then it has to decide what to do with it)... and that's the whole gig.

If it's looking for a 16-bit WORD and it gets 1110001000110011 then it says "Oh, that's 57907". If it's looking for a *SIGNED* 16-bit integer and it gets the same 1110001000110011 it says, "Oh, that's -7629" - or if it's a bitmap it says "that's black - black - black - white - white - white - black - white - .... etc"
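You can see "same bits, two readings" directly with Python's standard `struct` module (my illustration, nothing WC2-specific): unpack the identical two bytes once as unsigned, once as signed.

```python
import struct

raw = bytes([0b11100010, 0b00110011])   # the bits 1110001000110011

(unsigned,) = struct.unpack(">H", raw)  # read as an unsigned 16-bit WORD
(signed,)   = struct.unpack(">h", raw)  # read the SAME bytes as a signed int

print(unsigned)  # 57907
print(signed)    # -7629
```

Same bytes in, two different "numbers" out - the difference is entirely in what the program asked for.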

**Careful of this bit - maximum nerdage:**

The actual CPU etc. doesn't say "oh that's..." anything at all; it just has the ones and zeros. It's 1110001000110011, period. The system works that way because in a 16-bit CPU, adding 57907 to a number gives you exactly the same result as subtracting 7629. The answer is the same sequence of ones and zeros. The CPU doesn't know or care whether it is adding 57907 or adding '-7629'; the result is the same. In the end the *program* knows if it wanted a signed integer, an unsigned integer, black and white spots, a floating-point number or a string of nucleic acids, and it knows what it wants to do with that info... because that's what a program is. The CPU couldn't care less.

Neither could RAM or a disk drive or whatever... which is the stuff you are

manipulating to make your patch. Obviously to manipulate the data, you need

to know what the program expects to find, and what it wants to do with that

data - only then can you know what to change in order to get the result you

want.

BTW: You may have realised by now that the internal graphics format you are tweaking is an old Blizzard proprietary format called GRP. I think I gave a general description in one of my posts about PCX palettes or thereabouts.

*Oh, and one more thing....*

I expect you should already know, but if you are directly manipulating data in a file or volume with a hex editor, then you need to be aware that our funny human brains are used to seeing numbers with the least significant value at the right..

i.e 539

5 = 500

3 = 30

9 = 9

Most data storage (and certainly all the stuff in wc2) is the other way around. To

further confuse the issue, hex editors have evolved to display data 8 bits at a time

which = 1 byte = 2 hex digits.... 00 thru FF.

A 16-bit number, as we now know, has 4 hex digits - or 2 bytes. Most of the internal values in the wc2 game engine are either 16-bit or 8-bit. However the exe is a 32-bit PE file (M$ Portable Executable), so all of the values related to the exe as an object, memory addressing etc. are 32-bit values, which of course have 8 hex digits or 4 bytes.

The hex number 5D79 (dec 23929) is arranged exactly like a decimal number, except hex uses 16 digits (0123456789ABCDEF) instead of 10 (0123456789). The layout is the same - least significant on the right, each column to the left worth the next power of the number base.

Our trusty hex editor still presents everything as it was back when the world was made of 8-bit numbers (flared trousers, anyone?): the byte. So every 8 bits is presented as a two-digit hex number. Little digit on the right, big digit on the left.

But..... as we are directly manipulating the data as it exists, when we come to larger numbers (16-bit... 32-bit... etc.) we find that they are actually stored the other way around, i.e. with the smallest part first through to the largest last. However, each pair of 2 hex digits is still being 'conveniently' presented to us as a 'correct' hex number... which is actually backwards... SO

If we want to directly read or write data larger than 8 bits (1 byte) with a hex editor, we have to rearrange it - but only in groups of 2 digits. On disk, our hex number 5D79 is actually stored as 2 bytes like this: 79:5D ... the order of the bytes is reversed.

YES! Fellow nerdlings!... that's not true either - we all know that it's really just stored as a bunch of ones and zeros... but that's the way a hex editor displays output and accepts input, so that's what you will be using and seeing... but really, it's just another one of those kooky ways the hairless apes decided to do things, because we just love to make life easy for ourselves.

The same principle applies to a 32-bit value.... take a random 7EE9D123. On disk or in memory, we represent this as 23:D1:E9:7E ..... in practical terms, split it into groups of 2 digits then reverse the order.

7EE9D123

---> split into bytes

7E - E9 - D1 - 23

---> reverse the bytes

23 - D1 - E9 - 7E

^^^ And there we have it

Same for our 16-Bit value

5D79

--> split

5D - 79

--> reverse

79 - 5D

**BUT**

==>Remember to make sure you have the right number of digits to start with!

i.e. I want to write the number 2016 as a 32-bit (DWORD) value on disk:

2016 decimal = 7E0 hexadecimal

but it's 32-bit, so we want 8 digits = 000007E0 (just like 00002016 = 2016)

now we have 8 hex digits we're good to go:

000007E0

00 - 00 - 07 - E0

E0 - 07 - 00 - 00

^^^^^^^^^^^^^

This is how a hex editor would display the number 2016 stored

on disk as a DWORD
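Doing the split-and-reverse by hand gets error-prone fast, so here's a cross-check using Python's standard `struct` module (my sketch, not part of the original post): the `<` prefix means little-endian, which is exactly the byte order described above.

```python
import struct

# '<' = little-endian, 'H' = 16-bit unsigned, 'I' = 32-bit unsigned
print(struct.pack("<H", 0x5D79).hex(":"))      # 79:5d
print(struct.pack("<I", 0x7EE9D123).hex(":"))  # 23:d1:e9:7e
print(struct.pack("<I", 2016).hex(":"))        # e0:07:00:00
```

Those byte strings match the hand-reversed examples above, so you can use this to double-check anything before typing it into the hex editor.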

....and at the risk of stating the blindingly obvious, when you see something like:

0x6F34

the '0x' at the start just means "this is a hexadecimal number", because of course '1234' is a perfectly valid hex number as well as a decimal number. 0x1234 = decimal 4660....... indeed 101110011 is a valid hex number and a valid decimal number and a valid binary number. Time to STFU before I start talking about octal....
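The same digit string really does parse to different values depending on which base you tell the machine it's in - a one-liner each in Python (my illustration):

```python
s = "1234"
print(int(s, 10))           # 1234 -- read as decimal
print(int(s, 16))           # 4660 -- read as hex, i.e. 0x1234
print(int("101110011", 2))  # 371  -- read as binary
```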