How far can multidimensional arrays go in PHP?
I know that it is possible to have an array within an array, like this:

Array ( [0] => Array ( [0] => 40292633 [1] => 412 ) [1] => Array ( [0] => 41785603 [1] => 382 ) [2] => Array ( [0] => 48792980 [1] => 373 ) [3] => Array ( [0] => 44741143 [1] => 329 ) )

Can you also create an array within an array within an array, like this?

Array ( [0] => Array ( [0] => Array ( [0] => 40292633 [1] => 412 ) [1] => Array ( [0] => 41785603 [1] => 382 ) ) [1] => Array ( [0] => 41785603 [1] => 382 ) [2] => Array ( [0] => 48792980 [1] => 373 ) [3] => Array ( [0] => 44741143 [1] => 329 ) )
I'm curious about how far this can go: how many levels of arrays can you have?
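For illustration, here is a minimal PHP sketch (not from the original post, but using the same sample values as the dumps above) that builds exactly this kind of nesting:

<?php
// Two-level structure, as in the first dump above.
$twoLevels = array(
    array(40292633, 412),
    array(41785603, 382),
    array(48792980, 373),
    array(44741143, 329),
);

// Same data, but with one more level of nesting in the first slot.
$threeLevels = $twoLevels;
$threeLevels[0] = array(
    array(40292633, 412),
    array(41785603, 382),
);

print_r($threeLevels); // PHP prints the three-level structure without complaint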
OK, let's do the math and work out the absolute maximum. What do we need?
- We need to know how much memory it takes to store a PHP array.
- We need to know how much memory we have available to use (best-case scenario).
- Divide the available memory by the memory required to store one PHP array (a quick sketch of this follows the list).
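In other words, the whole estimate boils down to one division. A sketch with hypothetical names, before we fill in the real numbers below:

<?php
// Hypothetical helper: how many arrays fit into a given amount of memory?
function estimateMaxArrays($availableBytes, $bytesPerArray)
{
    // Only whole arrays count, so round down.
    return (int) floor($availableBytes / $bytesPerArray);
}

// Example call (the actual numbers are derived below):
// echo estimateMaxArrays(pow(2, 32), 100);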
How big is a PHP array? That's easy enough to work out: PHP is open source, and we know that PHP arrays are really hash tables underneath, so let's look at the relevant pieces of the PHP source:
#include <stdio.h>

typedef struct bucket {
    unsigned long h;
    unsigned int nKeyLength;
    void *pData;
    void *pDataPtr;
    struct bucket *pListNext;
    struct bucket *pListLast;
    struct bucket *pNext;
    struct bucket *pLast;
    const char *arKey;
} Bucket;

typedef struct _hashtable {
    unsigned int nTableSize;
    unsigned int nTableMask;
    unsigned int nNumOfElements;
    unsigned long nNextFreeElement;
    Bucket *pInternalPointer;
    Bucket *pListHead;
    Bucket *pListTail;
    Bucket **arBuckets;
    void *pDestructor;
    short persistent;
    unsigned char nApplyCount;
    unsigned char bApplyProtection;
} HashTable;

typedef union _zvalue_value {
    long lval;
    double dval;
    struct {
        char *val;
        int len;
    } str;
    HashTable *ht;  /* hash table: this is what arrays use */
    void *obj;      /* actually a zend_object_value, but treated as a pointer here */
} zvalue_value;

typedef struct _zval_struct {
    zvalue_value value;
    unsigned int refcount__gc;
    unsigned char type;
    unsigned char is_ref__gc;
} zval;

Now, let's quickly get an idea of how many bytes that uses:

int main(void)
{
    printf("%zu bytes where pointers are %zu bytes in size\n",
           sizeof(Bucket) + sizeof(zval) + sizeof(HashTable),
           sizeof(void *));
    return 0;
}
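Those sizes come from reading the C source. If you just want a rough userland feel for what an array costs, memory_get_usage() gives a ballpark; this is only a sketch, and the figure includes allocator overhead and varies by PHP version, so it will not match the raw struct sizes exactly:

<?php
// Rough, userland-only estimate of the memory cost of one (empty) array.
// The result includes allocator and bookkeeping overhead: treat it as a ballpark.
$count  = 10000;
$before = memory_get_usage();

$holder = array();
for ($i = 0; $i < $count; $i++) {
    $holder[] = array(); // allocate many empty arrays to average out the noise
}

$after = memory_get_usage();
echo round(($after - $before) / $count), " bytes per array (approx.)\n";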
Now, on a typical 32-bit system, the C program above tells us that the combined size of all the structs is 96 bytes, and that a pointer is 4 bytes. To use an array we also need a pointer to it, so a single array costs at least 100 bytes (96 bytes for the structs + 4 bytes for a pointer to the array). What does this tell us? Well, if a pointer is 4 bytes, we know how many distinct addresses there can be: 2^(4 * 8) => 2^32, which lets us address at most 4,294,967,296 bytes. Of course, a pointer itself takes 32 bits, so we could say the total number of pointers that fit is really 2^32 / 32, which is 134,217,728. Similarly, to find out how many PHP arrays fit in this memory, we take 2^32 / 100, which gives us 42,949,672.96 arrays; the trailing 0.96 is not an array (there isn't enough memory left for it), so we can't use that bit. In theory, then, we can create 42,949,672 arrays.
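A quick sanity check of those figures in PHP itself (not part of the original reasoning, just verifying the arithmetic):

<?php
// Re-derive the 32-bit figures from the paragraph above.
$pointerBits   = 4 * 8;                 // a 4-byte pointer has 32 bits
$addressable   = pow(2, $pointerBits);  // 2^32 = 4294967296 addressable bytes
$bytesPerArray = 96 + 4;                // structs (96) + one pointer to the array (4)

echo $addressable, "\n";                         // 4294967296
echo floor($addressable / 32), "\n";             // 134217728 pointers
echo $addressable / $bytesPerArray, "\n";        // 42949672.96
echo floor($addressable / $bytesPerArray), "\n"; // 42949672 whole arrays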
Note: the output on a 64-bit machine is not simply double that of the 32-bit machine; there it reads "168 bytes where pointers are 8 bytes in size". If you want to know the maximum number of arrays on a 64-bit platform, do the math on (2^64 / 176)...
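To illustrate that 64-bit arithmetic (a sketch following the same structs-plus-pointer logic, 168 + 8 = 176 bytes per array): 2^64 does not fit in a PHP integer, so this uses the bcmath extension if it is available and falls back to float precision otherwise:

<?php
// 64-bit case: 2^64 addressable bytes, 176 bytes per array.
if (function_exists('bcdiv')) {
    // Exact integer result via bcmath.
    echo bcdiv(bcpow('2', '64'), '176', 0), "\n";
} else {
    // Float fallback: good enough to see the order of magnitude (~1e17 arrays).
    echo floor(pow(2, 64) / 176), "\n";
}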
Is that correct? No, not at all. I haven't taken any of the other overhead into account, but it's safe to say you can create far more, and far more deeply nested, arrays than you will ever realistically need. If a 100-dimensional array sounds crazy to you, though, consider this: reaching a value buried that deep takes a long chain of lookups, the code becomes much harder to read and maintain, and performance will suffer as a result, so KISS (that is: keep it simple and sensible).