
2019 Skyrim LE Stability Guide


mrsrt

Recommended Posts

30 minutes ago, mrsrt said:

 

 Feel free to publish your guide or give a link to a good one about this; I'll make a section for it. 
 

What's the point of copy-pasting the Texture Optimizer, SMCO or other mods' descriptions or readmes anywhere other than on those mods' pages?

Link to comment
1 hour ago, yatol said:

In the STEP guide they worship there's a tool; when I took a look, it reported more RAM than what I have...

The tool gives all of your VRAM plus a percentage of your RAM as the total. Windows does exactly the same for 'available video memory', except Windows uses a different percentage of RAM and thus gives a different total than the VRAM tool.

I would guess that the tool interrogates the hardware rather than plucking a figure from thin air. The tool has been around for years, so I would have thought that by now someone would have noticed if it didn't work. Doesn't it come from the ENB dev site rather than being something that S.T.E.P. conjured up?

Link to comment
4 minutes ago, yatol said:

What's the point of copy-pasting the Texture Optimizer, SMCO or other mods' descriptions or readmes anywhere other than on those mods' pages?

Actually, I meant I will leave the link with a little description of what this is and why it should be done. The point is to make this guide more complete.

Link to comment

Results of quick research on the 4GB question.

- By default Skyrim definitely can use up to 4GB (it ships LAA-flagged). For some reason my Skyrim didn't reach values higher than 2GB; that seems to be coincidence.

- The LAA flag does not work on Windows 7 x64 due to Data Execution Prevention (DEP). To be able to use up to 4GB, DEP must be disabled.

- Ntcore's 4GB patch does not only flag the executable; it also makes an internal change:

[image: hex diff highlighting the extra bytes changed by the 4GB patch]

I haven't figured out what exactly this change does; feel free to continue the investigation if you want.

 

Default header

Spoiler

4D5A90000300000004000000FFFF0000B80000000000000040000000000000000000000000000000000000000000000000000000000000000000000030010000564C56000100000000EE120134194B510FF0E958196B6D9555850775A2E82BAEF21C819CAB790109A54791E5B21F9E6CAD43AFBD2E05954A54D7069810A579F78C36F11ED8AB5DF9F3580C7EF7BB08A94F95D98B936CADA24462CCE4872E4E94995087C1286FAE5D644E579FB8F39D9FCDABFE6B9FFEF942F93DC07CF08C39681C47229378CCBB6537E180A38073F25F000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000504500004C010600E57C43510000000000000000E00022010B0109000094C60000C2430000000000A168B5000010000000B0C60000004000001000000002000005000000000000000500000000000000008098010004000000000000020000800000100000100000000010000010000000000000100000000000000000000000588EE3004001000000507C01407B0B000000000000000000000000000000000000D0870118980D00E06AC7001C000000000000000000000000000000000000002820DB0018000000D81FDB0040000000000000000000000000B0C600640400000000000000000000000000000000000000000000000000002E7465787400000000A0C600001000000094C60000040000000000000000000000000000200000602E7264617461000000001D0000B0C60000F81C000098C600000000000000000000000000400000402E646174610000000070980000B0E30000760A000090E300000000000000000000000000400000C02E746C73000000000030000000207C01002800000006EE00000000000000000000000000400000C02E7273726300000000800B0000507C01007C0B00002EEE00000000000000000000000000400000402E72656C6F630000C2AE100000D0870100B0100000AAF90000000000000000000000000040000042000000000000000000000000000000000000000000000000

 

Patched header

Spoiler

4D5A90000300000004000000FFFF0000B80000000000000040000000000000000000000000000000000000000000000000000000000000000000000030010000564C56000100000000EE120134194B510FF0E958196B6D9555850775A2E82BAEF21C819CAB790109A54791E5B21F9E6CAD43AFBD2E05954A54D7069810A579F78C36F11ED8AB5DF9F3580C7EF7BB08A94F95D98B936CADA24462CCE4872E4E94995087C1286FAE5D644E579FB8F39D9FCDABFE6B9FFEF942F93DC07CF08C39681C47229378CCBB6537E180A38073F25F000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000504500004C010600E57C43510000000000000000E00022010B0109000094C60000C2430000000000A168B5000010000000B0C600000040000010000000020000050000000000000005000000000000000080980100040000AF630A01020000800000100000100000000010000010000000000000100000000000000000000000588EE3004001000000507C01407B0B000000000000000000000000000000000000D0870118980D00E06AC7001C000000000000000000000000000000000000002820DB0018000000D81FDB0040000000000000000000000000B0C600640400000000000000000000000000000000000000000000000000002E7465787400000000A0C600001000000094C60000040000000000000000000000000000200000602E7264617461000000001D0000B0C60000F81C000098C600000000000000000000000000400000402E646174610000000070980000B0E30000760A000090E300000000000000000000000000400000C02E746C73000000000030000000207C01002800000006EE00000000000000000000000000400000C02E7273726300000000800B0000507C01007C0B00002EEE00000000000000000000000000400000402E72656C6F630000C2AE100000D0870100B0100000AAF900000000000000000000000000400000420000000000000000

 

Patch place: AF630A0102

LAA flag place: E00022010

 

For now, that's enough information to say that the 4GB patch is not required for Skyrim. Can that additional binary change prevent CTDs? Probably not. Just in case of some miracle, I'll test my Skyrim for several hours with the unpatched exe to be sure the CTDs don't return, and then remove the section.
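For anyone who wants to check the flag themselves, here is a minimal sketch in Python (my own helper, not part of any tool mentioned in this thread) that reads the Characteristics field of a PE header and tests the IMAGE_FILE_LARGE_ADDRESS_AWARE bit (0x0020):

```python
import struct

IMAGE_FILE_LARGE_ADDRESS_AWARE = 0x0020

def is_laa_flagged(data: bytes) -> bool:
    """Return True if the PE image has the Large Address Aware flag set."""
    # e_lfanew (offset of the "PE\0\0" signature) lives at 0x3C in the DOS header
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    if data[pe_offset:pe_offset + 4] != b"PE\x00\x00":
        raise ValueError("not a PE image")
    # Characteristics is the 2-byte field at signature + 0x16 (after Machine,
    # NumberOfSections, TimeDateStamp, PointerToSymbolTable, NumberOfSymbols
    # and SizeOfOptionalHeader)
    characteristics = struct.unpack_from("<H", data, pe_offset + 0x16)[0]
    return bool(characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)

# usage: is_laa_flagged(open("TESV.exe", "rb").read())
```

Run against the hex dumps above, this would report True for both headers: the `2201` Characteristics word already has the 0x0020 bit set in the stock executable, consistent with the point that the game ships LAA-flagged.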

Link to comment
4 hours ago, mrsrt said:

...

Make a correct test with your game. Use my mods to generate stack dumps and test your game with different values of iMaxAllocatedMemoryBytes.

 

I did exactly the same thing in the two tests:

The log file with your recommended value of 64 MB reached a size of 635 MB in 11 minutes.

The log file with the recommended value of 150 kb reached a size of 100 MB in 12 minutes.

 

Papyrus_logs.rar

 

The game with the recommended value of 150 kb had fewer freezes and stayed responsive, and I could end the test properly after 10 minutes.

The game with your recommended value of 64 MB had many more freezes, went unresponsive around minute 6 or 7, and I had to close the game without stopping the tests.

Link to comment
1 hour ago, GenioMaestro said:

Make a correct test with your game. Use my mods to generate stack dumps and test your game with different values of iMaxAllocatedMemoryBytes.

 

I did exactly the same thing in the two tests:

The log file with your recommended value of 64 MB reached a size of 635 MB in 11 minutes.

The log file with the recommended value of 150 kb reached a size of 100 MB in 12 minutes.

 

Papyrus_logs.rar 7.03 MB

 

The game with the recommended value of 150 kb had fewer freezes and stayed responsive, and I could end the test properly after 10 minutes.

The game with your recommended value of 64 MB had many more freezes, went unresponsive around minute 6 or 7, and I had to close the game without stopping the tests.

Sorry, but... What exactly did you do? How many threads were launched? How much memory did each thread consume? Was the load increased gradually? What did you do when the possible limit was reached? How did you estimate Papyrus performance?

Link to comment
2 hours ago, Grey Cloud said:

The tool has been around for years, so I would have thought that by now someone would have noticed if it didn't work. Doesn't it come from the ENB dev site rather than being something that S.T.E.P. conjured up?

And nobody has noticed the "works slower than physical vram" part?

[image: screenshot of the VRAM tool result]

I closed something, the tool now finds 10656, and I now have 2650 MB of free RAM.

10656 - 2650 = 8006

That thing wants me to put VRAM + free RAM in that ini setting?

 

https://forums.tomshardware.com/threads/share-system-ram-with-my-gpu-vram.2125595/

Quote

So with this we can get a rough calculation of the bandwidth of the two RAM types in use.

Your system RAM has a max bandwidth of:
2200 x 128/8 = 35.2GBps

Your GPU RAM max bandwidth:
6000 x 256/8 =192GBps

So your GPU RAM transfers 540% more data than your system RAM. On top of that your system RAM has other demands on it such as the needs of the CPU, and there are other factors too. Long story short, if your GPU was using that RAM instead, your GPU performance would be limited to about the same as a GTX 730.
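The quoted arithmetic (effective transfer rate times bus width in bytes) is easy to redo for any card; a small sketch using the figures from the quote:

```python
def mem_bandwidth_gbps(transfers_mts: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s: mega-transfers per second * bytes per transfer."""
    return transfers_mts * (bus_width_bits / 8) / 1000

# figures from the quoted post
system_ram_gbps = mem_bandwidth_gbps(2200, 128)  # -> 35.2
gpu_vram_gbps = mem_bandwidth_gbps(6000, 256)    # -> 192.0
```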

 

[image: screenshot of Whiterun RAM/VRAM usage]

I need 5.6 GB of RAM for Whiterun, which becomes 4.3 GB in my VRAM (a DDS in RAM holds the full resolution plus mipmaps; a DDS in VRAM is either the full resolution or one mipmap, so you need more RAM than VRAM).

That barely uses half my VRAM; it doesn't look like there's any need to allow the GPU to use RAM.

What about RAM?

8 - 5.6 - 0.8 (skyrim.exe) - 1 (Windows) = 0.6 GB

600 MB left for trying new mods?

[image: screenshot of RAM usage]

Can't allow the GPU to use some of that RAM, if the bullshit that doesn't make much sense isn't bullshit.

https://forums.tomshardware.com/threads/gpu-not-using-vram.3275508/

Quote
Unless you're using integrated graphics, VRAM and system RAM have nothing to do with each other. The system cannot touch VRAM, and the GPU cannot touch system RAM (nor the pagefile). VRAM is used solely for video framebuffers, models, and textures used in the rendering process. Mostly textures. A framebuffer is only a few MB, as are most models. A single 4k texture is around 85 MB (4096x4096 x 4 bytes per pixel x 1.33 for lower-res MIP maps). Each notch you drop down in texture quality cuts VRAM used by that texture to 1/4. So a 2k texture is about 21 MB. A 1k texture about 5.3 MB. More if you've enabled anisotropic filtering.

https://en.wikipedia.org/wiki/Mipmap
https://en.wikipedia.org/wiki/Anisotropic_filtering#/media/File:MipMap_Example_STS101_Anisotropic.png

If the benchmark isn't designed to use many 4k textures or a ton of lower-res textures, VRAM use will not be very high. You're better off checking VRAM usage in a game, as that will usually result in (a lot) more textures being loaded than a benchmark.
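The quoted texture estimate (width × height × bytes per pixel, plus roughly 1/3 for the mip chain) can be sketched like this; the helper name is mine:

```python
def texture_mb(side_px: int, bytes_per_pixel: int = 4, mips: bool = True) -> float:
    """Approximate memory for a square uncompressed texture, in MiB.
    A full mip chain adds roughly 1/3 on top of the base level."""
    base = side_px * side_px * bytes_per_pixel
    total = base * 4 / 3 if mips else base
    return total / (1024 * 1024)

# matches the quote: ~85 MB for a 4k texture, ~21 MB for 2k, ~5.3 MB for 1k
```

Each step down in resolution quarters the base size, which is why the quote's numbers fall off so quickly.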
 

It's not about Skyrim, but it's the same.

 

With enbhost.exe, skyrim.exe becomes smaller in my RAM, because enbhost takes over loading textures into RAM from skyrim.exe (without ENB, the 5.6 GB of textures above can't happen; you RAM-CTD before that).

Nothing to do with VRAM.

 

 

There's no need for enbhost.exe to load the textures from RAM into VRAM; the GPU already does that when you don't have ENB, and it can also do it when you have ENB.

As for the ENB files, it's just your CPU loading the ENB shader after skyrim.exe's shader, using one of them for the next frame, and sending that to the GPU, which does its stuff.

 

So I have some doubts about that setting having anything to do with VRAM, or about enbhost.exe allowing the GPU to use RAM instead of VRAM to crush your performance for no reason.

Link to comment
1 hour ago, mrsrt said:

Sorry, but... What exactly did you do? How many threads was launched? How much memory each thread consumes? Was the load increased gradually? What did you do when possible limit was reached? How did you estimate Papyrus performance? 

The things to do: download my mods, read my blog, read my post, and make the test with your game.

 

 

For the people who cannot open or examine the files, I made a small extract of the important info:

Spoiler

LineFilter2 v1.1
============================================================
File searched          :Papyrus.1_150kb.log
Searched for           :"VM is"
Wildcard search        :no
Upper/lower case       :Ignore
#lines with hit        :29
#lines searched        :844435
Time elapsed           :7 sec
============================================================
[   391] |[08/23/2019 - 01:48:18AM] VM is freezing...
[   392] |[08/23/2019 - 01:48:18AM] VM is frozen
[   742] |[08/23/2019 - 01:48:27AM] VM is thawing...
350 lines

 

[  5827] |[08/23/2019 - 01:49:28AM] VM is freezing...
[  5828] |[08/23/2019 - 01:49:28AM] VM is frozen
[ 10171] |[08/23/2019 - 01:49:28AM] VM is thawing...
4343 lines

 

[ 80300] |[08/23/2019 - 01:50:36AM] VM is freezing...
[ 80301] |[08/23/2019 - 01:50:36AM] VM is frozen
[137782] |[08/23/2019 - 01:50:37AM] VM is thawing...
57k lines

 

[210238] |[08/23/2019 - 01:51:55AM] VM is freezing...
[210239] |[08/23/2019 - 01:51:55AM] VM is frozen
[231157] |[08/23/2019 - 01:51:55AM] VM is thawing...
111k lines

 

[307649] |[08/23/2019 - 01:53:55AM] VM is freezing...
[307650] |[08/23/2019 - 01:53:55AM] VM is frozen
[361879] |[08/23/2019 - 01:53:56AM] VM is thawing...
54k lines

 

[445834] |[08/23/2019 - 01:55:16AM] VM is freezing...
[445835] |[08/23/2019 - 01:55:16AM] VM is frozen
[539213] |[08/23/2019 - 01:55:16AM] VM is thawing...
94k lines

 

[606470] |[08/23/2019 - 01:56:23AM] VM is freezing...
[606471] |[08/23/2019 - 01:56:23AM] VM is frozen
[625813] |[08/23/2019 - 01:56:23AM] VM is thawing...
19k lines

 

[710621] |[08/23/2019 - 01:57:46AM] VM is freezing...
[710622] |[08/23/2019 - 01:57:46AM] VM is frozen
[756871] |[08/23/2019 - 01:57:46AM] VM is thawing...
46k lines

 

[825436] |[08/23/2019 - 01:59:01AM] VM is freezing...
[825437] |[08/23/2019 - 01:59:01AM] VM is frozen
[843369] |[08/23/2019 - 01:59:01AM] VM is thawing...
18k lines

Spoiler

LineFilter2 v1.1
============================================================
File searched          :Papyrus.0_64mb - copia.log
Searched for           :"VM is"
Wildcard search        :no
Upper/lower case       :Ignore
#lines with hit        :21
#lines searched        :6335417
Time elapsed           :44 sec
============================================================
[    391] |[08/23/2019 - 01:36:12AM] VM is freezing...
[    392] |[08/23/2019 - 01:36:12AM] VM is frozen
[    742] |[08/23/2019 - 01:36:21AM] VM is thawing...
350 lines

 

[   7536] |[08/23/2019 - 01:37:25AM] VM is freezing...
[   7537] |[08/23/2019 - 01:37:25AM] VM is frozen
[  12372] |[08/23/2019 - 01:37:25AM] VM is thawing...
48k lines

 

[  89957] |[08/23/2019 - 01:38:34AM] VM is freezing...
[  89958] |[08/23/2019 - 01:38:34AM] VM is frozen
[ 197814] |[08/23/2019 - 01:38:35AM] VM is thawing...
108k lines

 

[ 283312] |[08/23/2019 - 01:39:52AM] VM is freezing...
[ 283314] |[08/23/2019 - 01:39:52AM] VM is frozen
[ 553707] |[08/23/2019 - 01:39:54AM] VM is thawing...
270k lines

 

[ 616229] |[08/23/2019 - 01:41:06AM] VM is freezing...
[ 616230] |[08/23/2019 - 01:41:06AM] VM is frozen
[1670664] |[08/23/2019 - 01:41:10AM] VM is thawing...
1 million 54k lines

 

[1709520] |[08/23/2019 - 01:42:42AM] VM is freezing...
[1709523] |[08/23/2019 - 01:42:42AM] VM is frozen
[3524350] |[08/23/2019 - 01:42:50AM] VM is thawing...
1 million 816k lines

 

[3551029] |[08/23/2019 - 01:44:34AM] VM is freezing...
[3551030] |[08/23/2019 - 01:44:34AM] VM is frozen
[6308056] |[08/23/2019 - 01:44:46AM] VM is thawing...
2 million 757k lines

 

Every time the log registers a Stack Dump, it writes "VM is freezing..." and "VM is frozen" when the Stack Dump starts, and writes "VM is thawing..." when the Stack Dump ends.

 

You can do a simple search for "VM is" to find the start and the end of each Stack Dump.

Using LineFilter2 v1.1 in Notepad++ with a reduced version of the big log, I can show the important data.

 

In the log that uses the recommended value of 150 kb, the biggest Stack Dump has 111k lines, and each following Stack Dump has fewer lines.

In the log that uses the incorrect value of 64 MB, the size of the Stack Dumps increases every time a Stack Dump is generated.

 

That is exactly what your incorrect value of 64 MB in iMaxAllocatedMemoryBytes does, and it matches perfectly the alert that I showed in my first message. When the web page of the CK shows an alert, trust it, because it is real.
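The "VM is" search described above can also be scripted; a minimal sketch (my own helper, not LineFilter2) that returns the line count of each Stack Dump in a Papyrus log:

```python
def stack_dump_sizes(log_lines):
    """Return the number of log lines inside each Stack Dump,
    i.e. between "VM is frozen" and the next "VM is thawing..."."""
    sizes, start = [], None
    for i, line in enumerate(log_lines):
        if "VM is frozen" in line:
            start = i
        elif "VM is thawing" in line and start is not None:
            sizes.append(i - start - 1)
            start = None
    return sizes
```

A sequence of sizes that keeps growing is the pattern pointed to in the 64 MB log.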

Link to comment
1 hour ago, GenioMaestro said:

 

 

In the log that use the recomended value of 150 kb the biggest Stack Dump have 111k lines and the next Stack Dump have less lines.

 

Why is your search 7 seconds for the default setting, and 44 seconds for the non-vanilla setting?

 

#lines searched        :844435
Time elapsed           :7 sec

#lines searched        :6335417
Time elapsed           :44 sec

44/7 = 6.28

844435*6.28=5 303 051

 

So the game dumps scripts when the RAM allocated to them gets full, not because of some bit-width limit (like esps being 8-bit, so no more than 255)?

And?

A bigger memory setting allows skyrim.exe to load and dump more scripts. I doubt it's because the default setting didn't use the last 256 kb so you can load a few more with a bigger max; it's probably faster because it stops less often to dump that to your SSD, which is slower than your RAM.

Shouldn't there be fewer scripts with a bigger max? Where are the performance problems? Freezes? Or CTDs?

 

Instead of looking at the result of the problem, why not look at the problem?

It's not that hard to find the scripts that are loaded for nothing.

A condition like "actor has a bow equipped" on Auto Unequip Ammo makes it easy to stop loading those scripts for nothing.

You just have to do the same for hundreds of scripts (too bad the unofficial patch doesn't take care of the ones from Skyrim.esm).

Link to comment
16 hours ago, GenioMaestro said:

The things to do: download my mods, read my blog, read my post, and make the test with your game.

 

 

(full quote snipped; the log extracts and conclusions are identical to the post above)

Okay, I finally got some time to take a look at it. Actually, you have done good research but drawn the wrong conclusions. Let me explain what happened in both tests.

At first, you should understand that each dumped stack shows one working thread. Furthermore, you can always print them manually, without stressing Papyrus, with the "dps" console command. When you use it, the stack dumps will be shown in your log.

Since you didn't explain your test details, judging by the log I guess you took this https://www.loverslab.com/files/file/8779-multi_cloak_counter/ and started stressing your Papyrus.

 

Test 1.

The test started at 01:49:10AM. At 01:49:28AM (after 18 seconds) you got the warning "Suspended stack count is over our warning threshold", which also printed your stack dumps so you could understand why it happened. This message contained 2833 dumped stacks, which means your Papyrus was handling 2833 active threads when the warning appeared. The process of collecting thread dumps starts from the main game thread, which is also responsible for rendering. Since you had 2833 working threads, this process took a decent amount of CPU time. And because this time was taken from the render thread, you got a freeze for exactly that amount of time, due to the inability to process the next frame. Papyrus printed stack dumps 8 times in your log, which says you experienced 8 freezes in this test.

 

Test 2.

This test started at 01:37:04AM. At 01:37:25AM (21 seconds) you got "Suspended stack count is over our warning threshold" with already 377983 active threads. That is 133 times more than in the first test.

What can these results tell us?

1) With 64 MB of allowed memory, your Papyrus was able to handle 133x more threads before starting to panic, compared to 150 kb.

2) The warning messages appeared after almost the same time in both cases (18s vs 21s), which also says that Papyrus was handling threads much faster in the second test (2833 per 18s vs 377983 per 21s).

3) You should have render freezes lasting much longer, because your Papyrus needed to print 378k threads, which is entirely expected given that the printing time is linear in the number of threads.

 

What do these findings say about real Papyrus performance?

Honestly, I really didn't expect that Papyrus could work with that amount of threads. It means Papyrus is a well-made multithreaded platform if configured correctly.

Also, the situations that happened in both tests cannot happen during normal gameplay. No mod should produce that much stress unless it has serious problems. It means we should not receive the Papyrus warning with thread dumps, and its freezes, in a correctly working game. However, it still can happen, especially if we have several heavy scripts working, and as the results show, if we increase iMaxAllocatedMemoryBytes, Papyrus will process more, and better.

 

I don't think any more proof is needed to confirm that increasing iMaxAllocatedMemoryBytes to adequate values is a stable and helpful option.
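For reference, the setting both posters are arguing about lives in the [Papyrus] section of Skyrim.ini; a sketch with the two values from this thread (the byte conversions are mine):

```ini
[Papyrus]
; the value debated in this thread: GenioMaestro defends the recommended
; 153600 (150 KB); mrsrt argues larger values such as 67108864 (64 MB) help
iMaxAllocatedMemoryBytes=153600
```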

 

 

Link to comment

Well, at least you opened the files and looked at them. I'm going to apply common sense, following your words:

18 hours ago, mrsrt said:

No mod should produce that stress unless a mod has serious problems.

Normally, the game only has a few running scripts and we shouldn't have any Stack Dump.
Those few scripts only occupy a very small part of the memory and, of course, fit perfectly in 150 KB; they even fit in 75 KB, and the rest of those KB, normally, are not used.
Therefore, setting the value of iMaxAllocatedMemoryBytes to 64 MB means that, under normal circumstances, 63.9 MB of memory are being wasted.

 

When we have a mod with "serious problems", the default values allow the game to keep working without CTDs or big freezes, and the script system keeps working, more slowly, but it works.

When we have iMaxAllocatedMemoryBytes at incorrect values, the game can CTD or freeze for many seconds while generating Stack Dumps of millions of lines, and the script system stops working.

 

 

Additionally, the things that you call threads are scripts. Threads are a totally different thing.

And I don't know how you are making your numbers, but they don't match mine.

The Stack Dump generated at 01:49:28AM has 220 scripts.

The Stack Dump generated at 01:37:25AM has 264 scripts.

What are you counting to get 2833 and 377983?

 

19 hours ago, mrsrt said:

Honestly, I really didn't expect that Papyrus could work with that amount of threads. It means Papyrus is a well-made multithreaded platform if configured correctly.

Seems that you are learning some things.

 

Really, the game can run hundreds and hundreds of scripts at the same time without you noticing it. The capacity of the Script Engine is tremendous. Every time you load a savegame, hundreds of scripts are executed.

I made my mods to demonstrate that capacity and to explain to people that "script heavy mods" do not exist.

I have all the so-called "script heavy mods" installed and running in my game without any problem.

We only get problems when a badly developed mod starts generating thousands and thousands of scripts.

 

 

You seem to be very happy thinking that all the scripts are executed immediately when they enter the Script Engine. But that is totally false. Having thousands of active scripts does not mean they are executed. Really, only a very small number of scripts are executed in one round. The rest are waiting to be processed.

 

You can tell how many scripts have gained execution status by looking at the Instruction Pointer (IP) in the Stack Dump:

Spoiler

[08/23/2019 - 01:49:28AM] Dumping stack 15761:
[08/23/2019 - 01:49:28AM]     Frame count: 1 (Page count: 1)
[08/23/2019 - 01:49:28AM]     State: Waiting on other stack for call (Freeze state: Freezing)
[08/23/2019 - 01:49:28AM]     Type: Normal
[08/23/2019 - 01:49:28AM]     Return register: None
[08/23/2019 - 01:49:28AM]     Has stack callback: No
[08/23/2019 - 01:49:28AM]     Stack trace:
[08/23/2019 - 01:49:28AM]         [None].Multi_CFEffectCreatureApply_script.OnEffectFinish() - "Multi_CFEffectCreatureApply_script.psc" Line 26
[08/23/2019 - 01:49:28AM]             IP: 405    Instruction: 9    Line: 26
[08/23/2019 - 01:49:28AM]             [akTarget]: [TrainerGoldScript < (FF0017A4)>]
[08/23/2019 - 01:49:28AM]             [akCaster]: [Actor < (00000014)>]
[08/23/2019 - 01:49:28AM]             [::temp7]: "Multi_CFEffectCreatureApply_script OnEffectFinish Target:[TrainerGoldScript < (FF0017A4)>] name:Aela the Huntress"
[08/23/2019 - 01:49:28AM]             [::temp8]: [ActorBase < (0001A696)>]
[08/23/2019 - 01:49:28AM]             [::temp9]: "Aela the Huntress"
[08/23/2019 - 01:49:28AM]             [::NoneVar]: None
[08/23/2019 - 01:49:28AM]             [::temp10]: -1
[08/23/2019 - 01:49:28AM]             [::temp11]: -1.000000
[08/23/2019 - 01:49:28AM] Dumping stack 15805:

If the script does not have an Instruction Pointer, it is because that script never gained execution status, and it normally has "(requested call)" at the end of the stack trace line.

Spoiler

[08/23/2019 - 01:50:36AM] Dumping stack 110428:
[08/23/2019 - 01:50:36AM]     Frame count: 0 (Page count: 0)
[08/23/2019 - 01:50:36AM]     State: Running (Freeze state: Freezing)
[08/23/2019 - 01:50:36AM]     Type: Normal
[08/23/2019 - 01:50:36AM]     Return register: None
[08/23/2019 - 01:50:36AM]     Has stack callback: No
[08/23/2019 - 01:50:36AM]     Stack trace:
[08/23/2019 - 01:50:36AM]         [Active effect 5 on  (FF00179F)].ST_Ability_Eff.OnMagicEffectApply() - (requested call)
[08/23/2019 - 01:50:36AM]             [param 0]: [Actor < (00000014)>]
[08/23/2019 - 01:50:36AM]             [param 1]: [MagicEffect < (1E0028B4)>]
[08/23/2019 - 01:50:36AM] Dumping stack 110429:

 

I copied 3 Stack Dumps from each file and ran some searches so you can see some important data:

Spoiler

[image: Notepad++ search result counts for the six extracted files]

Files 1, 2 and 3 come from the 100 MB log, and files 4, 5 and 6 come from the 635 MB log.

 

File 1: 220 scripts and 0 requested call = 220 active

File 2: 4090 scripts and 3372 requested call = 718 active

File 3: 597 scripts and 111 requested call = 456 active

 

File 4: 264 scripts and 0 requested call = 264 active

File 5: 10636 scripts and 10503 requested call = 133 active

File 6: 26758 scripts and 26570 requested call = 188 active

 

As you can see, the number of scripts that have gained execution status is very low. But the recommended parameter of 150 kb allows more of them to execute in an overloaded situation, while the incorrect value of 64 MB has fewer scripts in execution.

 

The only real way to execute more scripts, or to execute them faster, is to give more time to the Script Engine or to raise the framerate. Changing iMaxAllocatedMemoryBytes only increases the problems caused by a bad script.
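The per-file counts above come down to two substring counts over a dump extract; a sketch (my own helper) of the same bookkeeping:

```python
def classify_stacks(dump_text: str):
    """Count total dumped stacks, the stacks that never ran (marked with
    "(requested call)" in their trace), and the remainder that gained
    execution status."""
    total = dump_text.count("Dumping stack")
    requested = dump_text.count("(requested call)")
    return total, requested, total - requested
```

For File 2 above this bookkeeping would return (4090, 3372, 718).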

Link to comment
7 minutes ago, GenioMaestro said:

 

File 1: 220 scripts and 0 requested call = 220 active

File 2: 4090 scripts and 3372 requested call = 718 active

File 3: 597 scripts and 111 requested call = 456 active

 

File 4: 264 scripts and 0 requested call = 264 active

File 5: 10636 scripts and 10503 requested call = 133 active

File 6: 26758 scripts and 26570 requested call = 188 active

 

As you can see, the number of scripts that have gained execution status is very low. But the recommended parameter of 150 kb allows more of them to execute in an overloaded situation, while the incorrect value of 64 MB has fewer scripts in execution.

 

And why do you have more active scripts, if you use the same stack dump spell or whatever, only editing the memory allocated to those scripts?

I didn't look at how many scripts I had in those dumps; it's useless to check that.

But if there are x scripts to load entering Whiterun, and you just mess around with ini settings, there will still be x scripts to load entering Whiterun again.

If those comparison tests weren't done in the same conditions... they are useless.

 

How many scripts can the game run? Doesn't matter; it's how fast it can run them that matters.

What can those tests do about that?

 

purpose of my stack dump spell? finding everything with onlocationchange, on itemadd or other event that occur too often, to kill most of them (by giving conditions to stuff that load them, to stop loading them when they can't do anything)

a difference in game? was able to do everything in tamriel on that save, checking logs size everytime i close the game, only find the mess i look for with stack dump spell, if there's mess i didn't look for left to fix, didn't want to miss it

 

i enter a mod house, there's mannequins in there.... wops, i don't use vanilla femalehead.nif, so i have what they call brown head bug

nothing hard to patch, but remaking those heads in crap kit? that takes too much time, i just renumber the mod mannequin to the vanilla mannequin

game put xxyyyyyy there, that was put in the save, so i look for it in savegamecleaner, to delete it (and it will become 00zzzzzz reloading the save)

it's that or reloading an earlier save from before those mannequins were loaded (when were they loaded?)

 

while i was in savegamecleaner, a small look at active scripts, updateoffset, because why not

in there, there was a mod lantern script, to turn them on or off, that was still there, to turn on or off lanterns that aren't there

a script from a cave near winterhold to move some bugs around

a script from a draugr crypt near solitude to do i don't remember what

....

 

useless stuff that wasn't unloaded when leaving the cell where it got loaded to do whatever, that also got deleted (it will come back if i enter that cell again, but i was lazy... too lazy to add an onunload event to those scripts)

how much it takes to load 800 scripts, it's useless to look for that

removing 400 scripts that can't do anything from those 800 scripts, that, that's not useless

Link to comment
21 minutes ago, GenioMaestro said:

Normally, the game only has a few running scripts and we shouldn't have Stack Dumps.
Those few scripts only occupy a very small part of the memory and, of course, fit perfectly in 150 KB, even in 75 KB, and the rest of that memory, normally, is not used.
Therefore, setting the value of iMaxAllocatedMemoryBytes to 64 MB means that, under normal circumstances, 63 MB and 900 KB of memory are being wasted.

 

When we have a mod with "serious problems", the default values allow the game to continue working without CTDs or big freezes, and the script system keeps working, more slowly, but it works.

When we have iMaxAllocatedMemoryBytes at incorrect values, the game can CTD or freeze for many seconds while generating Stack Dumps of millions of lines, and the script system stops working.

You have to understand, when Papyrus prints stack dumps it means Papyrus cannot work correctly any further; the script part of your Skyrim is broken and you cannot play the game normally anymore, no matter how many stack dumps end up printed in your log. 

If you're looking for a Skyrim that will work with mods that have memory and/or thread leaks, I agree, maybe it would be better to leave as little memory as possible for this trash and play the game without scripts. But what kind of idiotic approach would that be?

You have to understand another thing: allocated Papyrus memory does not directly determine the maximum number of working threads (or the number of stack dumps you will get). You can get 2000 dumps with 150 KB and only 20 dumps with 64 MB. It depends entirely on the threads' memory consumption and complexity. For example, I got about 120 stack dumps with 64 MB, which helped me detect a leak in PSQ. You can even test it yourself. 

However, the 64 MB option will certainly be able to handle many more of the same threads compared to 150 KB. It means that if you have a heavily scripted Skyrim, and those scripts do not have any leaks, with 64 MB Papyrus will be able to process more of them, and faster. Also, you may ultimately hit a situation where 150 KB is not enough and you get the warning with dumps, while 64 MB would carry the situation easily. 

This question is exactly the same as if you had 8 GB of RAM in your PC, some program had a memory leak, and you claimed that having 32 GB is worse because the emergency memory dump with 8 GB is created much faster than with 32 GB.

 

Quote

63 MB and 900 KB of memory are being wasted.

Let's do a little math. For example, my Skyrim currently consumes about 3.5 GB of RAM. That is 3584 MB, which means 64 MB is ~1.8% of the total consumed memory. Is that worth it to make scripts work better? In 2019, I think it is. 

Furthermore, look carefully at this option: iMaxAllocatedMemoryBytes. It sets the maximum memory that can be allocated for Papyrus, not a fixed total allocation, so it is not guaranteed that Papyrus will take the whole 64 MB. Allocation happens in pages, which can also be configured with iMaxMemoryPageSize and iMinMemoryPageSize. Papyrus will consume whatever it requires, but no more than you set in iMaxAllocatedMemoryBytes.
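For reference, all three settings live in the [Papyrus] section of Skyrim.ini. A sketch of the block (76800 bytes is the shipped default, i.e. 75 KB; the page-size values are the commonly cited defaults, so verify against your own ini):

```ini
[Papyrus]
; Maximum total memory the stack allocator may claim (default 76800 bytes = 75 KB)
iMaxAllocatedMemoryBytes=76800
; Per-page allocation bounds, in bytes
iMinMemoryPageSize=128
iMaxMemoryPageSize=512
```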

 

Quote

Additionally, the things that you call threads are scripts. Threads are a totally different thing.

Start from here, please https://www.creationkit.com/index.php?title=Threading_Notes_(Papyrus)

Also, you can continue here https://en.wikipedia.org/wiki/Runtime_system

Here https://en.wikipedia.org/wiki/Virtual_machine

And here https://en.wikipedia.org/wiki/Application_virtualization

 

1 hour ago, GenioMaestro said:

Additionally, the things that you call threads are scripts. Threads are a totally different thing.

And I don't know how you are making your numbers, but they don't match mine.

The Stack Dump generated at 01:49:28AM has 220 scripts.

The Stack Dump generated at 01:37:25AM has 264 scripts.

What are you counting to get 2833 and 377983?

Oh yes, I made a little mistake here; I forgot that I took the counts from the last dumps in both logs. Take a look at 01:57:46AM and 01:46:27AM. It doesn't change much, only the threads started per second from the beginning. 

 

1 hour ago, GenioMaestro said:

You seem to be very happy thinking that all the scripts are executed immediately when they enter the Script Engine. But that is totally false. Having thousands of active scripts does not mean they are executed. Really, only a very small number of scripts are executed in one round. The rest are waiting to be processed.

Of course it is; you have actually discovered the thread pool design pattern, lol. This logic is based on a thread executor, with active threads held in its thread pool. The executor moves context between threads, dynamically executing parts of code to make threads work in parallel. If every thread in the pool worked simultaneously, I'd just say it would be extremely slow. I could talk about this more, but it's already pretty well explained here https://en.wikipedia.org/wiki/Thread_pool

I was surprised that the Papyrus thread pool is able to hold that many threads, nothing more. However, if you have several thousands of threads in that pool and all of them require execution context, it will take up to several seconds, or even minutes, for the last tasks to finish. 

If you're interested in this, you can also read more about scheduled thread pools https://en.wikipedia.org/wiki/Scheduling_(computing) and the RegisterForUpdate Papyrus functions.
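The pool behaviour described above can be sketched in Python (an analogy only; the Papyrus executor is internal to the engine, and the worker and task counts here are invented): thousands of submitted tasks sit in the pool's queue while only a handful hold execution context at any moment.

```python
from concurrent.futures import ThreadPoolExecutor
import threading
import time

running = 0   # tasks currently holding a worker
peak = 0      # highest number seen running at once
lock = threading.Lock()

def task():
    global running, peak
    with lock:
        running += 1
        peak = max(peak, running)
    time.sleep(0.001)   # simulate a tiny bit of script work
    with lock:
        running -= 1

# 4 worker "execution contexts", 1000 submitted "scripts":
# everything past the first 4 waits in the pool's queue.
with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(1000):
        pool.submit(task)

print(peak)   # at most 4, no matter how many tasks were submitted
```

The larger the backlog, the longer the last submissions wait, which is the "seconds or even minutes" effect with thousands of queued threads.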

 

2 hours ago, GenioMaestro said:

File 1: 220 scripts and 0 requested call = 220 active

File 2: 4090 scripts and 3372 requested call = 718 active

File 3: 597 scripts and 111 requested call = 456 active

 

File 4: 264 scripts and 0 requested call = 264 active

File 5: 10636 scripts and 10503 requested call = 133 active

File 6: 26758 scripts and 26570 requested call = 188 active

 

As you can see, the number of scripts that have gained execution status is very low. But the recommended value of 150 KB allows more of them to execute in an overloaded situation, while the incorrect value of 64 MB leaves fewer scripts in execution.

Very good research and wrong findings, again. Let's look carefully at these results.

The first dumps for both situations bring almost the same results: 220 and 264 working threads, 0 awaiting. What do these results say? That under normal load, Papyrus handles threads the same way no matter how much memory it was given. 

In the next results we see that the 64 MB Papyrus holds many more threads while actually running fewer of them. Why this happens I explained above: the more threads in the thread pool, the lower the executor performance. Every time the executor needs to move context, it must decide where to move it among several thousands of threads. As a result, fewer scripts work simultaneously. 

As I mentioned above, Papyrus memory does not directly determine the number of held threads; it's just memory. Even one thread may fill the whole 64 MB, so in reality you will never meet a situation where several thousands of threads await executor context. You should also understand that your tests do not show the real picture of a heavily scripted Skyrim; you just tested Papyrus concurrency. 

If you want to see the real picture, you should do several tests with a changing context, as is always required for research like you are trying to do. For example:

- Gradually increasing memory with one thread and with several threads

- Executing functions of varying complexity, with internal and native calls, where the function processing time is captured

- Capturing the time when a new thread was launched, started and finished under various loads

And many others.

Actually, all this research was done years ago for the design pattern used in Papyrus. You can also read it on the wiki and on many specialized forums, where you'll find all the answers to your questions. 

Link to comment

Oh look, it's this thread again.

 

Don't mind me, just passing by to leave a few things:

 

1.- A common misconception about stack dumps is that somehow they are "an error in the game" and means "the game stopped working" which is a complete lie. Stack dumps are a warning in the log that means the game's script engine got overloaded, and many scripts over the warning threshold, got queued.

2.- Queued scripts will run, whether it's in the next frame, a few seconds later after a mini freeze, or 10 minutes later because the game is overloaded beyond reasonable parameters. Queuing scripts is an intentional design feature of the engine to make sure all scripts are run from start to end no matter what. I don't remember the exact page that says that, but it's somewhere out there.

3.- The real danger here is script accumulation. When the game gets overloaded with scripts, scripts get queued. If the game's overload situation does not stop, script accumulation can be progressive at an exponential rate, and you'll go from having a single stack dump because the game couldn't handle 220 scripts over the threshold, to a complete inability to reduce the load because they keep piling up infinitely until you have 10 million scripts queued. In game, the result is the complete collapse of anything scripted, which will now take over 10 minutes to show any results.

4.- If the memory value tests are showing script accumulation, well, that's a reason for concern. If the ini parameters can't handle the load, it's a timebomb that will explode whenever a stressful situation meets a script overload that can't be stopped. It's arguable whether this is fatal or not, considering the game's capacity to escape script spamming, say "moving to an empty cell and waiting for the script storm to pass" But nevertheless, a "script storm" is not normal and certainly not ideal as it disrupts playability. Sometimes, it's only temporary, but when a bad mod is causing it, there is a chance it becomes exponential and never stops.

5.- Another very real danger of script accumulation is that variables will be read at the wrong timing. Say I asked for "GetAnimationVariableBool bAllowRotation" during a power attack, which should be 1 during the time frame of a power attack. If script lag due to overload exists, this question will take longer, and by the time it is asked and answered, the time frame will be long passed and the answer will be 0. Now let's imagine something more important, like a quest depending on a properly timed script line; then we fail to get the proper timing, and scenes, events and everything could be stuck, and we get all kinds of weird in the game.

6.- There is no consensus on what "script heavy" even means. Some people believe it is "many scripts". Well, the game has thousands of scripts on its own already. I used to think it meant "scripts that are running during the entire game", say with a looping OnUpdate event, but I was wrong, since the game doesn't care about background running scripts. It's not "a large script" either, since the game is perfectly capable of running several instances of scripts that have over 1000 lines without any problems. Some scripts are longer than entire book sagas and they still run harmlessly. I see an interesting point above about "function cost" which may perhaps be onto something, but I find that idea still too vague, since every script would have to measure its "heaviness" as the total sum of the costs of every process in it, and how and when every process is run, and there is no way anybody sane would try to categorize mods by measuring their entire mechanics in that way. To me, the closest thing I have encountered to "heavy script" mods is mods that spam events/functions. The spamming can potentially generate script accumulation, and when prolonged in time, it may lead to the aforementioned issues. And in this notion, it's not necessary that a mod has a very complex structure to do that. A single script with a few lines is more than capable of spamming on its own.
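The accumulation danger from point 3 can be illustrated with a toy model in Python; the arrival and capacity numbers are invented, not measured from the engine:

```python
# Toy model: each frame the game queues `arriving` new scripts,
# but the engine can only finish `capacity` of them per frame.
def backlog_over_time(arriving, capacity, frames):
    queued = 0
    history = []
    for _ in range(frames):
        queued = max(0, queued + arriving - capacity)
        history.append(queued)
    return history

print(backlog_over_time(10, 12, 5))   # load under capacity: [0, 0, 0, 0, 0]
print(backlog_over_time(15, 12, 5))   # sustained overload:  [3, 6, 9, 12, 15]
```

Once arrivals exceed capacity and stay there, the queue grows every single frame and never drains on its own, which is exactly the "timebomb" behaviour described above.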

 

Now for the remaining points:

 

7.- Unrelated to the previous, but "adding mods to the end of the load order" is not good advice at all. The game doesn't care if you change load order or added mods at the beginning, the middle or the end of the list. It only cares about a CORRECT load order. If one mod should go before another, then any smart user should use LOOT and/or manual ordering to make sure their list is COHERENT with the records their mods are editing. When in doubt, use TES5Edit, which can very easily spot overwritten records and help users decide which mod should overwrite what, or even fix incompatibilities for them. Adding mods "to the end of the load order" is often nothing more than a recipe for disaster, since some mods actually need to be loaded before others to work, and that means necessarily putting them somewhere in the middle.

8.- I recently learned another interesting thing about plugins. If, on one hand, plugin order is irrelevant except in terms of compatibility and correct order, there is at least one instance where mods can potentially be harmful even after removal. And that is if they edit records that will keep being present even if the delinquent mod is gone. Say a vanilla quest depends on a global, but a mod altered the value of that global; one can remove the bad mod, but the global will still exist with the wrong value, and the game quest that uses it will not work. Such cases do exist and are likely the one instance where a mod can actually "break" the game. Any other mod adding/removal/moving operation is completely safe, provided one actually knows what one is doing (in consideration of correct load order, orphan script cleanup and record alteration).

Link to comment

 

18 minutes ago, Myst42 said:

1.- A common misconception about stack dumps is that somehow they are "an error in the game" and mean "the game stopped working", which is a complete lie. Stack dumps are a warning in the log that means the game's script engine got overloaded, and many scripts over the warning threshold got queued.

Correct. In this thread we are talking about this warning: "Suspended stack count is over our warning threshold", which also prints stack dumps. However, according to your own words below, the script part of your game will be broken from the point where you got the warning. 

 

Quote

2.- Queued scripts will run, whether it's in the next frame, a few seconds later after a mini freeze, or 10 minutes later because the game is overloaded beyond reasonable parameters. Queuing scripts is an intentional design feature of the engine to make sure all scripts are run from start to end no matter what. I don't remember the exact page that says that, but it's somewhere out there.

 

Quote

3.- The real danger here is script accumulation. When the game gets overloaded with scripts, scripts get queued. If the game's overload situation does not stop, script accumulation can be progressive at an exponential rate, and you'll go from having a single stack dump because the game couldn't handle 220 scripts over the threshold, to a complete inability to reduce the load because they keep piling up infinitely until you have 10 million scripts queued. In game, the result is the complete collapse of anything scripted, which will now take over 10 minutes to show any results.

Correct. I left a detailed explanation of how and why it happens in my previous post.

 

Quote

4.- If the memory value tests are showing script accumulation, well, that's a reason for concern. If the ini parameters can't handle the load, it's a timebomb that will explode whenever a stressful situation meets a script overload that can't be stopped. It's arguable whether this is fatal or not, considering the game's capacity to escape script spamming, say "moving to an empty cell and waiting for the script storm to pass" But nevertheless, a "script storm" is not normal and certainly not ideal as it disrupts playability. Sometimes, it's only temporary, but when a bad mod is causing it, there is a chance it becomes exponential and never stops.

Partially correct. Accumulation happens no matter how much memory you give to Papyrus. The only difference is that the more memory Papyrus has, the more (or larger) threads it will be able to hold in the thread pool. But, as I explained above, it is the same as having 8 GB in your system with some bad program leaking 7 GB of it, versus having 32 GB with the program leaking 31 GB. The problem is not the memory size; the problem is the leak.

 

Quote

5.- Another very real danger of script accumulation is that variables will be read at the wrong timing. Say I asked for "GetAnimationVariableBool bAllowRotation" during a power attack, which should be 1 during the time frame of a power attack. If script lag due to overload exists, this question will take longer, and by the time it is asked and answered, the time frame will be long passed and the answer will be 0. Now let's imagine something more important, like a quest depending on a properly timed script line; then we fail to get the proper timing, and scenes, events and everything could be stuck, and we get all kinds of weird in the game.

Partially correct. Actually, it's a well-known programming issue, and programming languages have their own solutions for it. Usually, it is locks https://en.wikipedia.org/wiki/Lock_(computer_science) It's also the developer's responsibility to handle situations like this. As far as I know, Papyrus does not have special operators for lock mechanisms; however, you can always write a solution yourself, like the example here https://www.creationkit.com/index.php?title=Threading_Notes_(Papyrus) (ctrl+f "locks"). 

In other words, that situation may happen only with a script with an unsafe, non-concurrency-aware architecture. And it can still slip through even when Papyrus itself is working properly. 
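The lock idea linked above, shown in Python rather than Papyrus (Papyrus code would typically use a guard variable instead, since it has no lock primitive): without serialization, two threads doing a read-modify-write on the same variable can interleave and lose updates.

```python
import threading

balance = 0
lock = threading.Lock()

def deposit(times):
    global balance
    for _ in range(times):
        with lock:          # serialize the read-modify-write
            balance += 1

threads = [threading.Thread(target=deposit, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(balance)   # 40000 on every run; without the lock, updates could be lost
```

The same reasoning applies to reading a timed value like bAllowRotation: a guard that delimits the valid window protects the read from arriving after the window has closed.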

 

Quote

6.- There is no consensus on what "script heavy" even means. Some people believe it is "many scripts". Well, the game has thousands of scripts on its own already. I used to think it meant "scripts that are running during the entire game", say with a looping OnUpdate event, but I was wrong, since the game doesn't care about background running scripts. It's not "a large script" either, since the game is perfectly capable of running several instances of scripts that have over 1000 lines without any problems. Some scripts are longer than entire book sagas and they still run harmlessly. I see an interesting point above about "function cost" which may perhaps be onto something, but I find that idea still too vague, since every script would have to measure its "heaviness" as the total sum of the costs of every process in it, and how and when every process is run, and there is no way anybody sane would try to categorize mods by measuring their entire mechanics in that way. To me, the closest thing I have encountered to "heavy script" mods is mods that spam events/functions. The spamming can potentially generate script accumulation, and when prolonged in time, it may lead to the aforementioned issues. And in this notion, it's not necessary that a mod has a very complex structure to do that. A single script with a few lines is more than capable of spamming on its own.

Incorrect. 

First of all, every function consumes CPU time (unless, perhaps, the function is empty). The time depends on the function's complexity, and complexity depends on what you do and call from that function. 

Let's take a look at an example:

function func()
  int i = 10
endfunction

One line, 2 operations:
1) variable creation

2) variable initialization with 10

Both operations are very simple and will take only several nanoseconds to complete. In total, they will take no more than a millisecond, so the cost of this function is about 1 ms.

 

And now look here:

function func()
  quest chargenquest = Game.GetFormFromFile(0x000DAF, "Alternate Start - Live Another Life.esp") as quest
  chargenquest.stop()
endfunction

This function is already much more complicated.

1) Creation of a variable of Quest type: several nanoseconds

2) The GetFormFromFile call is a very complicated function and is processed in the native part of the game. First, the engine needs to find the proper esp, comparing the entered string against every mod name you have until the proper one is found. Then it directly accesses a form by the entered address and returns it. The CPU time of that operation depends entirely on how many mods you have, how fast your PC is, and even how long the string you passed is. I bet it will be somewhere from 5 to 50 ms. 

3) When a form comes back from the GetFormFromFile call, it is cast to Quest. Usually it's a fast operation and shouldn't take more than 3 ms. 

4) Initialization of the previously created variable with the cast form.

5) The Stop() call for the quest. It is also a native function. I cannot guess what exactly happens in the native part under this function; however, it usually takes quite a decent amount of time.

When you sum the time for all these operations, you get the function's complexity. There's also the matter of Utility.Wait(), but that's another long story.

So, the more functions with high complexity run per second, the more heavily your Papyrus is loaded. However, if you start several thousands of threads with a low load, as GenioMaestro did, it will also load your Papyrus hard; in that case only the thread executor suffers.
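A rough way to see this "function cost" reasoning in action, as a Python analogy (the mod list and per-call timings here are invented; the loop merely mimics GetFormFromFile's scan over loaded plugin names):

```python
import time

# A fake load order; only the names matter for the comparison loop.
mods = [f"Mod{i:03}.esp" for i in range(250)]
mods.append("Alternate Start - Live Another Life.esp")

def cheap():
    i = 10                 # stands in for: int i = 10

def find_mod(target):
    # stands in for the plugin-name scan inside GetFormFromFile
    for name in mods:
        if name == target:
            return name
    return None

def cost_ms(fn, *args, reps=20000):
    # average wall-clock cost of one call, in milliseconds
    start = time.perf_counter()
    for _ in range(reps):
        fn(*args)
    return (time.perf_counter() - start) / reps * 1000.0

# The trivial assignment is reliably cheaper than the full-list scan.
print(cost_ms(cheap) < cost_ms(find_mod, "Alternate Start - Live Another Life.esp"))
```

The absolute numbers are meaningless outside the engine; the point is only that per-call cost scales with the work hidden behind the call, which is why a short script spamming an expensive native function can still be "heavy".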

 

Quote

7.- Unrelated to the previous but "adding mods to the end of the load order" is not good advice at all. The game doesnt care if you change load order or added mods at the beginning, the middle or the end of the list. It only cares about a CORRECT load order. If one mod should go before another, then any smart user should use LOOT and/or manual ordering to make sure their list is COHERENT with the records their mods are editing. When in doubt, use TES5Edit which can very easily spot overwritten records and help users decide which mod should overwrite what or even fix incompatibilities for them. Adding mods "to the end of the load order" is often nothing more than a recipe for disaster, since some mods actually need to be loaded before others to work, and that means necessarily putting them somewhere in the middle.

Correct; my idea was to make the new mod override everything, so no other mod will affect the new one and nobody will complain about things like NPCs having a different skin color from the body, xd. To test mods I always load them at the end; then, once tested, I move them to the right place, and if something changes I look at the overrides. At least, if something weird happens with the new mod after moving it up, I know for sure that it is because of overrides and not that the mod is broken, which is what I might assume if I had placed it there right away. However, fair remark.

 

Quote

8.- I recently learned another interesing thing about plugins. If on one hand, plugin order is irrelevant, except in terms of compatibility and correct order, there is at least one instance where mods can potentially be harmful even after removal. And that is if they edit records that will keep being present even if the delinquent mod is gone. Say a vanilla quest depends on a global, but a mod altered the value of that global, one can remove the bad mod, but then the global will still exist and will have the wrong value, then the game quest that uses it, will not work. Such cases do exist and are likely the one instance where a mod can actually "break" the game. Any other mod adding/removal/moving operation is completely safe, provided one actually knows what one is doing (In consideration of correct load order, orphan script cleanup and record alteration).

Absolutely correct. The game engine doesn't do any cleanup for cases like this. This is why I strongly recommend not removing mods mid-playthrough and keeping your main saves away from mods you aren't sure about.

Link to comment
15 hours ago, mrsrt said:

You have to understand, when Papyrus prints stack dumps it means Papyrus cannot work correctly any further; the script part of your Skyrim is broken and you cannot play the game normally anymore, no matter how many stack dumps end up printed in your log. 

I already discussed this point enough. Read the entire blog and the links. Maybe you'll learn something.

The Stack Dumps are totally harmless and can NEVER damage the game or the quests or the scripts or the savegame.

 

15 hours ago, mrsrt said:

If you're looking for a Skyrim that will work with mods that have memory and/or thread leaks, I agree, maybe it would be better to leave as little memory as possible for this trash and play the game without scripts. But what kind of idiotic approach would that be?

Again you have a severe error in a basic concept. It is impossible to have a memory leak in Papyrus because you cannot make direct memory assignments. You can only create variables, which are destroyed when the script ends, unless you use libraries for external persistent data such as JContainers and PapyrusUtil.

The only way to have a problem with scripts is to accumulate script instances until the game cannot manage any more.

 

15 hours ago, mrsrt said:

You have to understand another thing: allocated Papyrus memory does not directly determine the maximum number of working threads (or the number of stack dumps you will get). You can get 2000 dumps with 150 KB and only 20 dumps with 64 MB.

Because the iMaxAllocatedMemoryBytes parameter does not determine how much memory the Script Engine uses.

That is your big mistake, and it is explained on the CK web page.

 

The iMaxAllocatedMemoryBytes parameter only determines the size of the buffer used to transfer script execution requests from the Game Engine to the Script Engine.

Putting a big value in iMaxAllocatedMemoryBytes can only damage the performance of the Script Engine in overloaded games.

 

15 hours ago, mrsrt said:

However, if you have several thousands of threads in that pool and all of them require execution context, it will take up to several seconds, or even minutes, for the last tasks to finish. 

And that is good???

Because that only happens when you put a big value in iMaxAllocatedMemoryBytes.

 

15 hours ago, mrsrt said:

In the next results we see that the 64 MB Papyrus holds many more threads while actually running fewer of them. Why this happens I explained above: the more threads in the thread pool, the lower the executor performance. Every time the executor needs to move context, it must decide where to move it among several thousands of threads. As a result, fewer scripts work simultaneously. 

And that is good???

Because that only happens when you put a big value in iMaxAllocatedMemoryBytes (I repeat).

 

15 hours ago, mrsrt said:

You should also understand that your tests do not show the real picture of a heavily scripted Skyrim; you just tested Papyrus concurrency. 

If you want to see the real picture, you should do several tests with a changing context, as is always required for research like you are trying to do. For example:

- Gradually increasing memory with one thread and with several threads

- Executing functions of varying complexity, with internal and native calls, where the function processing time is captured

- Capturing the time when a new thread was launched, started and finished under various loads

And many others.

Actually, all this research was done years ago for the design pattern used in Papyrus. You can also read it on the wiki and on many specialized forums, where you'll find all the answers to your questions. 

If you say that, it must be because you have not tried my mods.

Install them and see how my mods can do "gradual memory increasing" by launching different numbers of events and executing different "functions of varying complexity" using different test types.

You can use the log file to see "the time when a new thread was launched, started and finished" and get the elapsed time to compute the performance.

Link to comment
1 hour ago, GenioMaestro said:

I already discussed this point enough. Read the entire blog and the links. Maybe you'll learn something.

The Stack Dumps are totally harmless and can NEVER damage the game or the quests or the scripts or the savegame.

 

harmless? you played the game during those stack dump tests?

complicated for stack dumps to break stuff if you are just standing in a tavern

many use alternate start, so they know that

https://www.creationkit.com/index.php?title=Threading_Notes_(Papyrus)

Quote

The Basic Rules of Papyrus Threading

  • Only one thread at a time can be doing anything with an instance of a script.
  • Whenever a thread first becomes active in a script, it "locks" that script, preventing other threads from accessing it.
  • When multiple threads try to manipulate the same instance of a script at the same time, a "queue" forms of all of those threads, which essentially wait in line for the script to become unlocked.

that's from crap kit site, the ones that made that thing, it's safe to say they know a little about it?

 

wait in line...some stuff can't be allowed to wait

if the carriage doesn't turn to follow the horse, it's crazy carriage time; if the door doesn't open before the horse crashes into it, the horse gets stuck in the door; if the npc doesn't ask your name when you leave the carriage, you are stuck there

 

you don't need stack dump to make stuff wait

skyrim.exe do whatever4, then start loading mod x quest

mod x quest has higher priority than helgen quest

helgen quest is ready for whatever5, skyrim.exe isn't done with mod x, it finishes that before taking care of whatever5

and that's crazy carriage time

 

yes, that doesn't happen often, only when you click on new game and don't appear in a cell with a statue where nothing is being done

unless you save during a siege, install some mods, and reload that siege (that's supposed to be wise?)

but if you ask the statue for the helgen start, once your mods are loaded, you can still get crazy carriage time, thanks to stack dumps

 

the game dumps scripts to make room for new ones; while it is doing that, it's not running whatever5 either, and if your carriage is already flying when it finally runs, it's too late

 

you give priority 100 to the helgen start, you have a better chance of avoiding the crazy carriage problem

you give conditions to the stuff that loads the scripts you find in your dumps, you no longer see them in your dumps

you alt f4 every time you die, you won't have surprises because of ram leftovers (the game won't reload ironarmor.dds, it's already in your ram, and it won't reload questx.pex either; it's safer to alt f4 than to wonder if you'll have problems because of that death)

 

Link to comment

 

3 hours ago, GenioMaestro said:

I alrready discused enougth about this point. Read the entire blog and the links. Maybe you learn something.

The Stack Dumps are totally harmless and NEVER can damage the game or the quest or the scripts or the savegame.

Don't you find it a bit silly to cite your own blog as proof when you are a party to the dispute, lol? Especially with no exact point.

 

3 hours ago, GenioMaestro said:

Again you have a severe error in a base concept. It is impossible to have a memory leak in Papyrus because you cannot make direct memory assignments. You can only create variables that are destroyed when the script ends, unless you use libraries for external persistent data such as JContainers and PapyrusUtil.

The only way to have a problem with scripts is to accumulate script instances until the game cannot manage any more.

Oh, really? First of all, function variables are not cleared the moment the function ends; they are left for the VM to destroy. You didn't read what I left for you above, did you? And you can indeed create a memory leak, which you confirm yourself further down.

 

Quote

Because the parameter iMaxAllocatedMemoryBytes does not determine how much memory the Script Engine uses.

That is your big mistake, and it is explained on the CK web page.

 

The iMaxAllocatedMemoryBytes parameter only determines the size of the buffer used to transfer script execution requests from the Game Engine to the Script Engine.

Putting a big value in iMaxAllocatedMemoryBytes can only damage the performance of the Script Engine in overloaded games.

Please show me where that is written.

 

Quote

And that is good???

Because that only happens when you put a big value in iMaxAllocatedMemoryBytes.

 

And that is good???

Because that only happens when you put a big value in iMaxAllocatedMemoryBytes (I repeat).

Read my whole message carefully.

 

Quote

If you say that, it must be because you have not tried my mods.

Install them and see how my mods can produce "gradual memory increasing" by launching different numbers of events and executing different "functions of varying complexity" using different test types.

You can use the log file to see "the time when a new thread was launched, started and finished" and compute the performance from the elapsed time.

Okay, it's better explained here: https://en.wikipedia.org/wiki/Benchmarking#Procedure

Sorry, but at this point I have doubts that you are qualified to continue this conversation. I'm already tired of explaining basic things about how VM programming works to you, especially when you read my messages only partially. All the answers to this message already exist in my previous posts, which have been confirmed by your own tests and the wiki links. Read my last 3 messages and their links carefully and you will have no more questions.

As of now I regard the conversation as done. Enough proofs, tests, links and explanations have been given to confirm my words, and you have offered nothing objective to disprove them.

Link to comment

Alright then.

15 hours ago, mrsrt said:

However, accordingly to your words below, your script part of the game will be broken

The point was exactly that stack dumps don't get the game "broken". Although the definition of "broken" seems to be the actual matter of debate here, since I only meant that script accumulation causes a delay that disrupts normal functioning. If the overload ends, the queued scripts will finish doing their thing and the pending scripts list will be cleaned, allowing the game to return to normal.

It was only "broken" during the period the overload was causing script lag. And in my opinion, "broken" means it's critical and probably can't be fixed, which is not the case here, unless we're experiencing exponential accumulation that won't ever go away.

Of course, I guess I can understand why some people think this temporary disruption means the game is "broken", but that's just a difference of concept.

15 hours ago, mrsrt said:

Partially correct. Accumulation happens no matter how much memory you give to Papyrus. The only difference is that the more memory Papyrus has, the more (or larger) threads it will be able to hold in the thread pool. But, as I explained above, it is the same as having 8GB in your system while some bad program leaks 7GB, versus having 32GB while the program leaks 31GB. The problem is not the memory size, the problem is the leak.

I don't know enough to start talking about memory management to get deeper into this topic.

I do know one thing though: the warning system exists for a reason. We get stack dumps when we have script overload, and we don't get them when the game is not overloaded. This is simply a matter of opening saves and looking at the active script count. Any Papyrus ini value that allows stack dumps to progressively amass scripts is dangerous. The optimal value should always be the one that allows the system to get rid of the overload in the most efficient way. It doesn't matter how large the dump is; what matters is that it doesn't generate more of them, each larger than the last.

15 hours ago, mrsrt said:

you can always write a solution yourself, like an example here https://www.creationkit.com/index.php?title=Threading_Notes_(Papyrus)

I have used locks before in some of my own scripts to make sure stuff runs in order, one thread at a time. However, that kind of lock simply doesn't work in the occasions I spoke of earlier. When you need an immediate variable from the game, it doesn't matter how many locks you put on it, the game will still retrieve it in real time when the script asks. Normally that should be a matter of milliseconds, but if we have script lag, the question can be delayed for minutes even. I tested it with my own mods. Script lag makes one of them simply stop working, not because its scripts became "corrupt" or "broken", but because it can't ask the question in time. In a script lag scenario, by the time a lock expires it will be too late to retrieve an immediate variable at the value it had when the question should've been asked. Same power attack example: I can put a lock on "OnAnimationEvent", but by the time it reads the variable, the power attack will no longer be 1; it's already over and will return 0. Some variables are stored at the call of an event, and you can access them later even if the timing is incorrect, and that's where locks can come in handy, but others are independent from the event and only exist in real time.

15 hours ago, mrsrt said:

When you sum time for all these operations you will get the function complexity

And just as I said, the concept behind this is calculating every script's impact based on how long it takes to run all its functions. Which is insane if you seek to categorize mods as "script heavy". You'd need to calculate the impact of every single script of every single mod with equations that become more and more complicated once you consider other variables, like how often the script runs, etc... No human can achieve that kind of insanity.

Granted, we can figure out that "a script takes longer to run" based on how many time/memory demanding lines it has. That is exactly why stuff is considered "faster" or "slower", and why we should store "PlayerRef" as a variable inside a script instead of asking "Game.GetPlayer()" every time we need to reference ourselves. But that's just measuring how fast a script is, not how "heavy" it is as people generally understand it. The "heavy script" myth is about scripts that can potentially destroy games due to how complex they are. A slow script is not necessarily "heavy". It's just slow.
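To illustrate the "PlayerRef" point: a minimal sketch of the caching pattern (the script name and the commented item are hypothetical, added only for illustration):

```papyrus
Scriptname PlayerCacheExample extends Quest

; Fill this property with the Player reference in the editor (or assign it
; once in OnInit); after that, reading it is just a memory access instead
; of a native round-trip into the game engine.
Actor Property PlayerRef Auto

Event OnInit()
    ; Slow pattern: a native call on every use
    ; Game.GetPlayer().AddItem(SomeItem, 1)

    ; Fast pattern: reuse the cached reference
    Debug.Trace("Cached player: " + PlayerRef)
EndEvent
```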

 

Link to comment
1 hour ago, Myst42 said:

Alright then.

The point was exactly that stack dumps don't get the game "broken". Although the definition of "broken" seems to be the actual matter of debate here, since I only meant that script accumulation causes a delay that disrupts normal functioning. If the overload ends, the queued scripts will finish doing their thing and the pending scripts list will be cleaned, allowing the game to return to normal.

It was only "broken" during the period the overload was causing script lag. And in my opinion, "broken" means it's critical and probably can't be fixed, which is not the case here, unless we're experiencing exponential accumulation that won't ever go away.

Of course, I guess I can understand why some people think this temporary disruption means the game is "broken", but that's just a difference of concept.

Yes, you're right here. I consider my game broken if it cannot process scripts correctly. Some misunderstanding may occur on this point, I agree.

 

1 hour ago, Myst42 said:

I don't know enough to start talking about memory management to get deeper into this topic.

I do know one thing though: the warning system exists for a reason. We get stack dumps when we have script overload, and we don't get them when the game is not overloaded. This is simply a matter of opening saves and looking at the active script count. Any Papyrus ini value that allows stack dumps to progressively amass scripts is dangerous. The optimal value should always be the one that allows the system to get rid of the overload in the most efficient way. It doesn't matter how large the dump is; what matters is that it doesn't generate more of them, each larger than the last.

If it were responsible only for critical situations, you'd be right. But a higher memory value will also help to handle more legitimate scripts. If memory is not enough, Papyrus will have a hard time trying to find free memory for required operations, which will eventually slow your Papyrus down or even stall it completely. Actually, maybe 64MB is not the best value, but I'm afraid we have no tooling to find the best fit. At least it doesn't damage a properly working game with no broken scripts.

 

1 hour ago, Myst42 said:

I have used locks before in some of my own scripts to make sure stuff runs in order, one thread at a time. However, that kind of lock simply doesn't work in the occasions I spoke of earlier. When you need an immediate variable from the game, it doesn't matter how many locks you put on it, the game will still retrieve it in real time when the script asks. Normally that should be a matter of milliseconds, but if we have script lag, the question can be delayed for minutes even. I tested it with my own mods. Script lag makes one of them simply stop working, not because its scripts became "corrupt" or "broken", but because it can't ask the question in time. In a script lag scenario, by the time a lock expires it will be too late to retrieve an immediate variable at the value it had when the question should've been asked. Same power attack example: I can put a lock on "OnAnimationEvent", but by the time it reads the variable, the power attack will no longer be 1; it's already over and will return 0. Some variables are stored at the call of an event, and you can access them later even if the timing is incorrect, and that's where locks can come in handy, but others are independent from the event and only exist in real time.

And this is why Papyrus performance is so important. Actually, the things you're talking about here are called concurrency: https://en.wikipedia.org/wiki/Concurrency_(computer_science) And there are many things you should know to work with it properly, like:

Atomicity https://en.wikipedia.org/wiki/ACID

Volatility https://en.wikipedia.org/wiki/Volatile_(computer_programming)

Synchronization https://en.wikipedia.org/wiki/Synchronization_(computer_science)

And many others. It's a very complicated, profession-oriented subject.

 

As for your case with power attacks under a stressed Papyrus: you should not access dynamic global variables from threads whose start time you don't control. Usually, in cases like this, you would suspend the thread that changes your dynamic variable until the thread that works with the variable finishes. It is a common multithreading pattern. But Papyrus (thankfully) does not provide anything like that, because suspending the render thread would cause a freeze. So all I can say is that you took the wrong approach to implement that feature in Papyrus.
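For reference, the kind of lock Myst42 describes is usually sketched in Papyrus as a busy-flag guard, in the spirit of the Threading Notes page linked earlier (script and function names here are hypothetical). As noted above, under script lag the wait itself gets delayed, so this serializes access but cannot rescue real-time reads:

```papyrus
Scriptname LockExample extends Quest

bool busy = false

Function DoGuardedWork()
    ; spin until the previous caller releases the flag
    While busy
        Utility.Wait(0.1)
    EndWhile
    busy = true

    ; ... critical section: touch shared state here ...

    busy = false
EndFunction
```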

 

2 hours ago, Myst42 said:

And just as I said, the concept behind this is calculating every script's impact based on how long it takes to run all its functions. Which is insane if you seek to categorize mods as "script heavy". You'd need to calculate the impact of every single script of every single mod with equations that become more and more complicated once you consider other variables, like how often the script runs, etc... No human can achieve that kind of insanity.

Granted, we can figure out that "a script takes longer to run" based on how many time/memory demanding lines it has. That is exactly why stuff is considered "faster" or "slower", and why we should store "PlayerRef" as a variable inside a script instead of asking "Game.GetPlayer()" every time we need to reference ourselves. But that's just measuring how fast a script is, not how "heavy" it is as people generally understand it. The "heavy script" myth is about scripts that can potentially destroy games due to how complex they are. A slow script is not necessarily "heavy". It's just slow.

Not exactly. When a script requests some function to be processed, it asks for the executor's context, and how long the function holds that context defines the function's complexity. But there are two small caveats.

1) If you call a native function, it delegates CPU time to the game engine; however, Papyrus is still responsible for that call.

2) If you call Utility.Wait() the current thread goes to sleep for a limited time. The thread executor does not stop it; it's a special thread state where the thread remains active but doesn't consume CPU time. Actually, it's a long story.

In other words: 

- If a function holds the context for 0.9 seconds (without sleep) out of every second, we can consider that script heavy.

- If a function holds the context for 10 ms, but runs every 0.5 seconds, we cannot consider it heavy.

- If a function works for 5 seconds out of every 6 but sleeps most of that time, we cannot consider the script heavy. It is, however, a badly designed script and will slow the executor down in a different way, but that's another long story.

- If a function has infinite loops or recursive calls like:

function loopHealExample()
  Actor player = Game.GetPlayer()
  while player.IsInLocation(someLoc)
    player.DamageObject(-1)
  endwhile
endfunction
  
function recursiveHealExample()
  Actor player = Game.GetPlayer()
  if player.IsInLocation(someLoc)
    player.DamageObject(-1)
    recursiveHealExample()
  endif
endfunction

These are extremely heavy scripts that consume 100% of the CPU time Papyrus can give a single thread (usually one CPU core). If you encounter a function like this, I strongly recommend rewriting it or getting rid of the mod. Furthermore, my examples are not literally infinite, since they have exit conditions, but if you write something like while (true) with an unreachable exit condition, that thread will remain forever, even if you remove the mod and stop the owning quest.

To implement things like this, RegisterForUpdate() should be used instead, but I guess you already know that.
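As a hedged sketch of that rewrite, reusing the hypothetical someLoc from the examples above: instead of holding the executor inside a While loop, the script yields between checks by re-arming a single update (chaining RegisterForSingleUpdate() is a commonly preferred variant of RegisterForUpdate()):

```papyrus
Scriptname PollingHealExample extends Quest

Actor Property PlayerRef Auto
Location Property someLoc Auto  ; hypothetical, as in the examples above

Event OnInit()
    RegisterForSingleUpdate(1.0)
EndEvent

Event OnUpdate()
    If PlayerRef.IsInLocation(someLoc)
        PlayerRef.DamageObject(-1)  ; same illustrative "heal" call as above
    EndIf
    ; re-arm; the chain stops cleanly when the owning quest stops,
    ; avoiding the orphaned updates RegisterForUpdate() is known for
    RegisterForSingleUpdate(1.0)
EndEvent
```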

 

Link to comment

Thread updated.

- 4GB Patch section replaced with a warning for Windows 7 users. The 4GB patch does not expand the game's memory, because the executable is already LAA-flagged.

- ContinueGameNoCrash link updated with actual version of the mod.

- Papyrus performance section now has a somewhat better explanation. Detailed information will not be included in the main post, to keep the guide newbie-friendly. However, anybody interested can still read this topic.

- High FPS patches section was expanded with a better way to solve physics problems, provided by the perfect havok fix https://www.nexusmods.com/skyrim/mods/91598 The section is now recommended even for those who play at 60 FPS.

- Critter Spawn Fix section is now optional.

- The recommendation about load order modification was edited and split out.
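For readers who want to experiment with the Papyrus settings discussed in this thread, they live in the [Papyrus] section of Skyrim.ini. The values below are the commonly cited vanilla defaults; the contested tweak is shown commented out:

```ini
[Papyrus]
fUpdateBudgetMS=1.2
fExtraTaskletBudgetMS=1.2
fPostLoadUpdateTimeMS=500.0
iMinMemoryPageSize=128
iMaxMemoryPageSize=512
iMaxAllocatedMemoryBytes=76800
; the contested tweak raises the last value to 64MB:
; iMaxAllocatedMemoryBytes=67108864
```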

 

Link to comment
On 8/25/2019 at 1:03 PM, GenioMaestro said:

Because the parameter iMaxAllocatedMemoryBytes does not determine how much memory the Script Engine uses.

That is your big mistake, and it is explained on the CK web page.

 

The iMaxAllocatedMemoryBytes parameter only determines the size of the buffer used to transfer script execution requests from the Game Engine to the Script Engine.

Putting a big value in iMaxAllocatedMemoryBytes can only damage the performance of the Script Engine in overloaded games.

On 8/25/2019 at 5:06 PM, mrsrt said:

Show me please, where it was written. 

On the CK web page, but you simply ignore the WARNING. I have been telling you for 3 days, but you are so convinced of your idea that you won't consider the possibility that you could be wrong. Read it again:

WARNING: this setting is for stack size (Call Stack), not heap size (Data Size)

 

You can read a similar explanation in the post linked by ASlySpyDuo, specifically this passage:

The stack is a structure in memory that holds information about the current script routines being run, the values being passed between these routines, and various other related information. 

 

This paragraph in your first post is totally false:

Quote

These values will help the Papyrus engine handle overloaded situations better; as a result, they may decrease the delay between script events.

 

The value of iMaxAllocatedMemoryBytes has no relation to the memory used by the Script Engine. As you can see in my logs, the Script Engine can have thousands and thousands of scripts waiting for execution. It is impossible to fit that information into the 75k of the default configuration. The memory usage of the Script Engine is dynamic.

 

Changing the value of iMaxAllocatedMemoryBytes does not increase performance in normal situations and can only increase the problems when the game is overloaded, as I demonstrated. It is the size of the Call Stack.

 

Technical Data:

Spoiler

The Script Engine does not know which scripts must be executed, because it is a simple executor. Only the Game Engine knows whether an object has a script attached, because that information is inside the ESP.

 

When the Game Engine needs to execute a script, it puts a "Request for Execution" in the Call Stack of the Script Engine.

In the next frame, when the Script Engine gets its 1.2 milliseconds of time, it looks at the Call Stack, transfers the requests into its own data memory, marking them as "Requested Call", and clears the stack.

Next, the Script Engine initializes the script with its parameters and starts executing it.

 

 

In a normal situation the game only puts in a few "Request for Execution" entries, which fit perfectly in 75k or 150k, so increasing the value of iMaxAllocatedMemoryBytes has no effect.

 

But in an overloaded situation the game fills the Call Stack, and the value of iMaxAllocatedMemoryBytes acts as a brake: it prevents all the "Request for Execution" entries from being transferred at the same time, which would saturate the Script Engine, and so preserves its performance. In this way we have fewer scripts waiting for execution and more scripts actually executing.

 

If we put a big value in iMaxAllocatedMemoryBytes we are removing the brake. In an overloaded situation the Game Engine can put thousands of "Request for Execution" entries in the buffer, because it has a gigantic size, and the Script Engine accepts all of them.

In this way, the Script Engine wastes 90% of its time transferring the data from the Call Stack to its own data heap and making memory requests to accommodate thousands of new scripts.

The consequences can be catastrophic, as you can see in my logs.

 

Putting a big value in iMaxAllocatedMemoryBytes can only damage performance and increase the problems caused by a momentary overload of the Script Engine, resulting in fewer scripts executing and of course less performance.

 

As I always say, test your own game, experiment with different values and settings, and see the results with your own eyes.

 

But please, stop saying I don't know what I'm talking about. Check my words by testing in your game before saying that I am wrong. If you can demonstrate that I'm wrong, show your Papyrus log.

Link to comment
5 hours ago, GenioMaestro said:

 

WARNING: this setting is for stack size (Call Stack), not heap size (Data Size)

 

here's what I understand from that, having no idea what heap size or stack size are:

 

if I put an event abracadabra in my coffer script, with

trace("stack dump stack dump what's you gonna do when it come for you / stack dump stack dump what's you gonna do when it come for you / stack dump stack dump what's you gonna do when it come for you....")

 

that won't make those dumps appear faster, because it's not loaded into script memory

unlike coffer.additem(ironarmor), which is what they call stack size: the stuff loaded from that script

 

so when you see in your log

13h00 cf load on npc x

13h01 cf load on npc y

and so on, that's helping you get stack dumps

so why not delete those useless trace() calls? people post those logs in the support section, but nobody reads them
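On the point about leftover trace() calls: a minimal sketch of how a mod author could gate logging behind a flag instead of shipping always-on traces (script, property and prefix names are hypothetical; note that building the message string still costs a little at the call site even when the flag is off, so deleting traces outright remains cheaper):

```papyrus
Scriptname TraceGuardExample extends Quest

; flip to true only while debugging; ship with it false
bool Property DebugLogging = false Auto

Function Log(string msg)
    If DebugLogging
        Debug.Trace("[MyMod] " + msg)
    EndIf
EndFunction
```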

 

 

Link to comment
On 8/25/2019 at 4:27 PM, mrsrt said:

As for your case with power attacks with stressed papyrus you should not access to dynamic global variables with threads you don't know when will be started. Usually, in cases like this, you should suspend the thread that changes your dynamic variable until the thread that works with the variable finish. It is a common multithreading situation. But Papyrus (thankfully) does not provide stuff like this because suspending the render thread will cause freeze. So, only I can say u took a wrong way to implement that thing with Papyrus.

Continuing on this idea seems like thread derailment tbh, but suffice it to say that acquiring real-time variables was likely the only way to get the thing I wanted done, done. Ask around and I doubt you'll find a flawless way of identifying power attacks in an animation event. Hell, if you do, let me know and I'll try to update what I can. bAllowRotation is the only scripted thing that has been found to work in such cases. I can agree that whenever one can use more stable variables, one should do it, but sometimes a mod's goals just need dynamic variables or else it won't do the thing it's supposed to do. Sometimes there are many ways to do things and one has a choice of which should be optimal; sometimes there is only one choice.

On 8/25/2019 at 4:27 PM, mrsrt said:

These are extremely heavy scripts that consume 100% of the CPU time Papyrus can give a single thread (usually one CPU core). If you encounter a function like this, I strongly recommend rewriting it or getting rid of the mod. Furthermore, my examples are not literally infinite, since they have exit conditions, but if you write something like while (true) with an unreachable exit condition, that thread will remain forever, even if you remove the mod and stop the owning quest.

Ah, the infamous infinite While loop without any breaks.

Yeah, I've seen some of those, and even tested myself why they shouldn't exist.

Still though, my original point is that there is no clarity of concept on what "heavy" scripting means. You have your definition, other people have theirs, and most users don't have the slightest clue what they're even talking about, going as far as avoiding mods with scripts in general because they believe that the more scripts a mod has, the more likely it's gonna "break" their game.

You use the concept mostly for "slow" scripts, considering how much time/memory they consume, and that's valid from your point of view. I even kinda like it myself. However, I first encountered the concept as meaning just a slow script.

Now, that While loop and such things I don't even consider functional scripts. Those kinds of abominations are just bad coding.

It's a broken mod to begin with, and I'd say there is a difference between a script that works and one that's broken. A script can be very, very slow and take a lot of time and resources to finish, but if your game can take it, it's not "broken". And given the enormous capacity of the game's engine to process scripts, going from 10-250 scripts on a regular basis and tolerating overloads of thousands while still coming out alive, I'd say the game is more than equipped to handle a slow script under normal circumstances as long as one thing happens: it ends without complicating the system any further.

Eh... like I said in my first post here, we've had similar threads before, and my opinion remains the same: everyone has their own definition of certain concepts, "heavy scripting" being one of them. I agree it may be true that some scripts consume more resources, but who cares as long as they actually work. Now bad coding, that's an entirely different kind of abomination.

Link to comment
13 hours ago, GenioMaestro said:

...

Okay, to make it finally clear, I ran several tests on my Skyrim setup.

 

Test 1 - Simple memory leak

First, I wrote a simple memory leak to see how Skyrim behaves:

Function loop()
	Debug.Trace("[MEMORY TEST] loop started")
	int i = 0
	While true
		Float[] arr = new Float[128]
		i += 1
		If (i % 1000) == 0
			Debug.Trace("[MEMORY TEST] " + i + " allocations happened")
		EndIf
	EndWhile
EndFunction


64 mb test:

[08/26/2019 - 08:58:53PM] [MEMORY TEST] loop started
...

[08/26/2019 - 08:59:14PM] [MEMORY TEST] 2342000 allocations happened

At this point Skyrim simply runs out of memory. Task manager shows about 4GB of memory consumption, and the game hard-freezes or simply crashes.

 

75kb test:

[08/26/2019 - 09:02:27PM] [MEMORY TEST] loop started
...

[08/26/2019 - 09:02:48PM] [MEMORY TEST] 2389000 allocations happened

Exactly the same picture with 75kb; nothing changes.

 

Findings:

- The Float type is 4 bytes, and each iteration we allocate an array containing 128 floats, i.e. 512 bytes. That means the first run allocated about 2,342,000 × 512 = 1,199,104,000 bytes, which is roughly 1.1 GB for array data alone.

- The maximum memory needed to complete one function is not limited by Papyrus, neither by iMaxAllocatedMemoryBytes nor by anything else.

 

Test 2 - Extreme memory consumption

Let's see how Papyrus survives several allocations of about 1 GB each. To do that, we need to allocate a big amount of memory without reaching the total application limit. In the previous test both runs were able to handle about 2.34-2.39 million allocations, so 2 million should be enough for the test.

Code was properly edited to stop in time:

Function loop()
	Debug.Trace("[MEMORY TEST] loop started")
	int i = 0
	While true
		Float[] arr = new Float[128]
		i += 1
		If (i % 1000) == 0
			Debug.Trace("[MEMORY TEST] " + i + " allocations happened")
		EndIf
		If i >= 2000000
			Return
		EndIf
	EndWhile
EndFunction

The test is relaunched 3 seconds after it finishes, using RegisterForSingleUpdate(3).

 

75kb test:

[08/26/2019 - 09:51:51PM] [MEMORY TEST] loop started
...
[08/26/2019 - 09:52:06PM] [MEMORY TEST] 2000000 allocations happened
[08/26/2019 - 09:52:09PM] [MEMORY TEST] loop started
...
[08/26/2019 - 09:52:12PM] [MEMORY TEST] 310000 allocations happened

At this point Skyrim again reached the memory limit.

 

64mb test:

[08/26/2019 - 09:58:16PM] [MEMORY TEST] loop started
...
[08/26/2019 - 09:58:31PM] [MEMORY TEST] 2000000 allocations happened
[08/26/2019 - 09:58:34PM] [MEMORY TEST] loop started
...
[08/26/2019 - 09:58:39PM] [MEMORY TEST] 375000 allocations happened

Nothing changed for 64mb, as expected.

 

Let's give Papyrus more time to destroy unused objects. The next loop starts after a minute (RegisterForSingleUpdate(60)).

 

64mb test:

[08/26/2019 - 10:05:02PM] [MEMORY TEST] loop started
...
[08/26/2019 - 10:05:17PM] [MEMORY TEST] 2000000 allocations happened
...
[08/26/2019 - 10:06:17PM] [MEMORY TEST] loop started
...
[08/26/2019 - 10:06:22PM] [MEMORY TEST] 369000 allocations happened

75kb test:

[08/26/2019 - 10:12:14PM] [MEMORY TEST] loop started
...
[08/26/2019 - 10:12:29PM] [MEMORY TEST] 2000000 allocations happened
...
[08/26/2019 - 10:13:29PM] [MEMORY TEST] loop started
...
[08/26/2019 - 10:13:37PM] [MEMORY TEST] 372000 allocations happened

Surprisingly, nothing changed again. That is a very bad sign: it may mean Papyrus does not clear its memory at all during a game session. But in games, clearing may also happen during loading, for example when we change location.

I wasn't able to test that properly, because almost every action after the first run of the test led to a CTD: NPC dialogue, opening the console, opening the menu, etc. So I changed the test to allocate 100k times per 20 seconds and started running across locations: LAL prison -> Bannered Mare -> Whiterun -> Bannered Mare -> Whiterun and so on. That let the game survive 4 minutes until the CTD, but mostly because the VM was frozen most of the time during loads. Only 8 loops (100k iterations each) executed before it eventually led to a CTD. These are very unhappy results, suggesting that the more scripts allocate, the less stable the game becomes.

 

Test 3 - Stacks

Let's get back to iMaxAllocatedMemoryBytes. According to your links, if this parameter affects only the stack frame size, then only stack calls should be affected by changing its value. Let's check it.

Actually, it's very easy to test with recursive calls. Each recursive call grows the stack with return frames until an overflow happens. The code was edited appropriately:

int itr = 0
Function recursiveAlloc()
	Float[] arr = new Float[128]
	itr += 1
	If (itr % 1000) == 0
		Debug.Trace("[MEMORY TEST] " + itr + " allocations happened")
	EndIf
	recursiveAlloc()
EndFunction

 

75kb test:

[08/26/2019 - 11:49:27PM] [MEMORY TEST] recursion launched
...
[08/26/2019 - 11:51:10PM] [MEMORY TEST] 1078000 allocations happened

At this point I got a CTD when I pressed the ESC key. Memory was gradually leaking, but much slower compared to the loop variant. The last number I saw was about 3GB.

 

64mb test:

[08/27/2019 - 12:05:40AM] [MEMORY TEST] recursion launched
...
[08/27/2019 - 12:08:29AM] [MEMORY TEST] 258000 allocations happened

Here I just stopped the test because the leak was very slow. Memory usage was about 2.1GB.

 

1kb test:

[08/27/2019 - 12:18:09AM] [MEMORY TEST] recursion launched
...
[08/27/2019 - 12:19:43AM] [MEMORY TEST] 1067000 allocations happened

At this point memory reached 3GB, I clicked ESC and got CTD.

 

And now the same with no array allocation:

int itr = 0
Function recursiveAlloc()
	itr += 1
	If (itr % 1000) == 0
		Debug.Trace("[MEMORY TEST] " + itr + " iterations")
	EndIf
	recursiveAlloc()
EndFunction

(Each run about 2 minutes)

 

1kb test:

[08/27/2019 - 01:39:12AM] [MEMORY TEST] recursion launched
...
[08/27/2019 - 01:41:26AM] [MEMORY TEST] 1169000 iterations

75kb test:

[08/27/2019 - 01:45:36AM] [MEMORY TEST] recursion launched
...
[08/27/2019 - 01:47:55AM] [MEMORY TEST] 1179000 iterations

64mb test:

[08/27/2019 - 01:50:21AM] [MEMORY TEST] recursion launched
...
[08/27/2019 - 01:52:30AM] [MEMORY TEST] 259000 iterations

 

And 2 more runs for 10 minutes

128 byte test:

[08/27/2019 - 02:00:25AM] [MEMORY TEST] recursion launched
...
[08/27/2019 - 02:01:00AM] [MEMORY TEST] 643000 iterations
...
[08/27/2019 - 02:02:00AM] [MEMORY TEST] 995000 iterations
...
[08/27/2019 - 02:03:00AM] [MEMORY TEST] 1253000 iterations
...
[08/27/2019 - 02:04:00AM] [MEMORY TEST] 1502000 iterations
...
[08/27/2019 - 02:05:00AM] [MEMORY TEST] 1679000 iterations
...
[08/27/2019 - 02:06:00AM] [MEMORY TEST] 1842000 iterations
...
[08/27/2019 - 02:07:00AM] [MEMORY TEST] 1990000 iterations
...
[08/27/2019 - 02:08:00AM] [MEMORY TEST] 2128000 iterations
...
[08/27/2019 - 02:09:00AM] [MEMORY TEST] 2272000 iterations
...
[08/27/2019 - 02:09:34AM] [MEMORY TEST] 2341000 iterations

 

75kb test:

[08/27/2019 - 02:18:29AM] [MEMORY TEST] recursion launched
...
[08/27/2019 - 02:19:00AM] [MEMORY TEST] 547000 iterations
...
[08/27/2019 - 02:20:00AM] [MEMORY TEST] 925000 iterations
...
[08/27/2019 - 02:21:00AM] [MEMORY TEST] 1186000 iterations
...
[08/27/2019 - 02:22:00AM] [MEMORY TEST] 1406000 iterations
...
[08/27/2019 - 02:23:00AM] [MEMORY TEST] 1598000 iterations
...
[08/27/2019 - 02:24:00AM] [MEMORY TEST] 1801000 iterations
...
[08/27/2019 - 02:25:00AM] [MEMORY TEST] 1949000 iterations
...
[08/27/2019 - 02:26:00AM] [MEMORY TEST] 2087000 iterations
...
[08/27/2019 - 02:27:00AM] [MEMORY TEST] 2218000 iterations
...
[08/27/2019 - 02:28:19AM] [MEMORY TEST] 2379000 iterations

 

These are very interesting results, which lead to the following findings:

- iMaxAllocatedMemoryBytes does not limit stack frame size (at least not in any obvious way).

- The 64MB option slows recursive calls. Whether that is good or bad is actually an open question. With a recursive leak I was even able to play the game for some time, while 75kb led to a CTD almost instantly. I would bet the game can survive a play session with several non-intensive recursion leaks if 64MB is set.

- Papyrus does not even warn when recursion happens. That is another unhappy finding. Most programming languages have a limited stack, and when you reach the limit a stack overflow error occurs. In Papyrus your game can keep running with several runaway recursions and you will never know about it unless you find it yourself.
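Since Papyrus itself never raises a stack overflow error, the only safeguard I can think of is a manual depth guard inside the script. A minimal sketch (the function name and the depth cap are arbitrary examples, not part of the tests above):

```
; Sketch of a manual recursion guard. Papyrus gives no stack overflow
; warning, so the script has to track its own depth. MAX_DEPTH is an
; arbitrary illustrative cap, not a tested threshold.
int depth = 0
int MAX_DEPTH = 1000

Function guardedRecursion()
	depth += 1
	If depth > MAX_DEPTH
		Debug.Trace("[MEMORY TEST] depth cap reached, aborting recursion")
		depth -= 1
		Return
	EndIf
	; ... real work here ...
	guardedRecursion()
	depth -= 1
EndFunction
```

This won't fix a leak that already exists, but it turns a silent runaway recursion into a traceable log line.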

 

And now let's test your words:

Quote

The iMaxAllocatedMemoryBytes parameter only determines the size of the buffer to transfer script execution requests from the Game Engine to the Script Engine.

We will call several native functions in an infinite loop until the buffer is filled:

Function nativeCallLoop()
	Debug.Trace("[MEMORY TEST] native call loop started")
	int i = 0
	While true
		Game.GetPlayer().GetWorldSpace().GetKeywords()
		i += 1
		If (i % 1000) == 0
			Debug.Trace("[MEMORY TEST] " + i + " calls happened")
		EndIf
	EndWhile
EndFunction

 

75kb test:

[08/27/2019 - 04:28:04AM] [MEMORY TEST] native call loop started
...
[08/27/2019 - 04:29:53AM] [MEMORY TEST] 6000 calls happened

64mb test:

[08/27/2019 - 04:34:22AM] [MEMORY TEST] native call loop started
...
[08/27/2019 - 04:36:12AM] [MEMORY TEST] 6000 calls happened

The same speed, the same memory consumption, and not a trace of any buffer limit. Let's make the operation simpler to increase the total number of iterations:

Function nativeCallLoop()
	Debug.Trace("[MEMORY TEST] native call loop started")
	int i = 0
	While true
		Game.GetPlayer()
		i += 1
		If (i % 1000) == 0
			Debug.Trace("[MEMORY TEST] " + i + " calls happened")
		EndIf
	EndWhile
EndFunction

And just to be sure I set iMaxAllocatedMemoryBytes to 16 bytes:

[08/27/2019 - 04:49:06AM] [MEMORY TEST] native call loop started
...
[08/27/2019 - 04:51:06AM] [MEMORY TEST] 19000 calls happened

What buffer were you talking about, and how can it be touched in an environment where native calls don't produce new objects?

 

Ideally, this research should also include performance tests, but Papyrus performance is hard to measure: its environment has no access to a system nanosecond timer (by default), and logging doesn't even show milliseconds. I currently have no more time to finish this; if you want, you can continue, but you should know what to test and how.
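For anyone who wants to try, a rough approach is the vanilla Utility.GetCurrentRealTime() function, which returns real time since the game launched as a float in seconds. A sketch (the function name and the operation under test are illustrative; float precision is coarse, so only large iteration counts give meaningful comparisons):

```
; Rough timing sketch using the vanilla Utility.GetCurrentRealTime().
; The returned value is a 32-bit float of seconds since launch, so
; resolution is coarse; compare only long runs, not single calls.
Function timedLoopTest(int iterations)
	float startTime = Utility.GetCurrentRealTime()
	int i = 0
	While i < iterations
		Game.GetPlayer() ; operation under test
		i += 1
	EndWhile
	float elapsed = Utility.GetCurrentRealTime() - startTime
	Debug.Trace("[MEMORY TEST] " + iterations + " calls took " + elapsed + " seconds")
EndFunction
```

Running the same function under different iMaxAllocatedMemoryBytes values would at least show relative differences, even without millisecond-accurate logging.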

For now we can only be sure that iMaxAllocatedMemoryBytes:

- does not affect total Papyrus memory

- does not limit the stack size

- increases the total number of threads the executor can hold, in some cases

- slows recursive calls 

 

P.S. I didn't provide the Papyrus logs because I'm not sure the forum would allow uploading that many files; however, the key events are included in the code insertions. If you want to see exactly what happens in a given case, you can run a test yourself: all the code is included.
