So I just tested it on DX12, using only:

[SystemSettings]
r.ShaderPipelineCache.Enabled=1
D3D12.PSO.DiskCache=1
D3D12.PSO.DriverOptimizedDiskCache=1

I can hardly detect any stutter now, only infrequent, very short (about 50 ms) freezes. DX12 is the cure, it seems.
Per the Unreal Pipeline State Object Cache documentation [1], the PSO cache is not functional in games that don't use the "Share Material Shader Code" option in Unreal's project settings. We don't use shared material shader code because it's bad for modding and bad for memory on the non-PC platforms, so enabling this should have no impact whatsoever. Happy to accept proof to the contrary if anyone understands this better than me. [1]: https://docs.unrealengine.com/4.26/en-US/SharingAndReleasing/PSOCaching/PSOReference/
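For anyone following along, a minimal sketch of how those two pieces relate in a stock UE4 project, assuming the standard config locations; the point is that the first line is a build-time project setting, so players toggling only the runtime cvar can't work around it:

; DefaultGame.ini (project side, baked in when the game is packaged)
[/Script/UnrealEd.ProjectPackagingSettings]
bShareMaterialShaderCode=True

; Engine.ini (user side): per the docs, ineffective unless the above was enabled
[SystemSettings]
r.ShaderPipelineCache.Enabled=1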
Then it's at least DX12 that makes a difference, shifting workload from the CPU to the GPU, if I understood that correctly. Needless to say, a prerequisite for evaluating this is a clean system, free of bloatware, adware, download managers and useless third-party AV software.
Most likely yes, though it's very hard to find out what exactly with such poor access to the engine itself. It could very well be the shaders themselves. The problem with editing too many settings in the settings.ini is that there's no way to know whether something changed in a future patch and your settings.ini is now working against it.
DX12 is faster in general but tends to introduce more hitching on the first run-through of a level (building the PSO cache, funnily enough). It's likely to be the default at some point in the future. If you don't get any stability issues running with DX12 (we don't QA it internally), then I'd say go for it. As for the other suggestions in this thread, I can't comment on the validity of any ini file tweaks or make any recommendations, and I can't comment on the specific guidance for DTG provided in the OP, but I will say that this isn't the first time I've seen this thread. Waiting for the graphics driver to build and cache the shaders for the first time and waiting for garbage collection are the key causes of hitches within TSW, and we're always working on improvements in these areas.
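To connect that to the tweaks floating around this thread: the two hitch causes Will names map onto two groups of cvars that appear in the configs posted later. A purely illustrative grouping using values from those posts, not DTG guidance:

; Engine.ini [SystemSettings]
; first-run shader/PSO hitches: cache pipeline state so the driver work happens once
r.ShaderPipelineCache.Enabled=1
D3D12.PSO.DiskCache=1
; garbage-collection hitches: purge less often and split GC work into smaller subtasks
gc.TimeBetweenPurgingPendingKillObjects=900
gc.MinDesiredObjectsPerSubTask=50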
One other thing I wonder about is how many caches are too many? DirectX's, Nvidia's caches... After the first run of a route I have very few hitches; I don't know why others have a lot more (and longer) stutters/hitches after the compiling should be done. It was the same in TSW3 with my 2070 Super, and now with a 4070. While you are here: can't we get a (vegetation/tree) LOD setting for PS5/Xbox? I'd rather have it locked at 30 fps with more LOD range than what it is now; it looks very bad tbh.
Why did DTG show a graph with DX12 in use on TSW4? Are they planning on supporting DX12 in TSW4 on release?
You should switch to DX12 by default: we are in 2023 and all video cards have drivers optimized more for DX12 than for DX11. The performance increase is quite noticeable, and the initial stutter when running a route can be countered by forcing shader compilation at the start of the game, or even when loading a route, like most modern games do. Even if it takes minutes or hours to compile shaders, a smooth experience afterwards is well worth it.
For technical reasons it's not possible right now to promise an explicit shader precompilation function if we switch to DX12 (for reasons related to the flag discussed above). To eliminate all shader compilation hitches we'd need to load every asset from every DLC in order to be sure we'd found all of the shader code inside it, which is going to be a very crunchy, slow process for users not on solid-state storage. We're looking into what we can do to offer a precompilation function that's better than "sit in the cab of an AI service and walk away from your machine for an hour"; no promises, obviously.
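Until an official function exists, the closest user-side lever is the batching of Unreal's runtime PSO cache, which shows up in the configs later in this thread. A hedged sketch: the values are the ones posted below and the comments are my reading of the cvar help text, so treat both as approximate rather than recommendations:

; Engine.ini [SystemSettings]
r.ShaderPipelineCache.Enabled=1
r.ShaderPipelineCache.BatchSize=100          ; PSOs compiled per batch during loading
r.ShaderPipelineCache.BackgroundBatchSize=1  ; batch size when compiling in the background
r.ShaderPipelineCache.PrecompileBatchSize=1  ; batch size while precompiling
r.ShaderPipelineCache.BatchTime=1            ; approximate per-frame time budget (ms)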
Ignoring the DX12 part, I might be misunderstanding this, but I would presume you would load only the shaders for a particular DLC when starting that DLC, rather than trying to load every shader for every DLC when the game is starting up. I don't know.
Well, thank you for your reply and the very detailed explanation. I am really glad that the major problem with stuttering is being looked into and that you are trying to find alternatives to improve this. A precompilation function sounds great indeed; it's far better to let the computer do its thing for a couple of minutes than to drive through a thunderstorm of stutters, or to use the AI driver as a shader compilation task just like you mentioned above. Since you are trying to find a workable solution for the shader compilation stutter problem, I am guessing you are also seriously considering using DirectX 12. If it helps: a lot of us have been using DX12 since TSW2 at least, with fantastic results performance-wise, especially in really crowded areas like big stations. TSW4 also runs on DX12 like a dream once you make it through a first run.
As above, with DX12 and a couple of ini tweaks the game runs really very well. Still a couple of very small/micro stutters here and there, but otherwise very well. You could introduce shader compilation as a menu option in the first instance and let us sit around for 5-10 minutes per route while it works through them. Hopefully by that time AMD will let us set our cache size so we can have most of our routes pre-cached and ready to play!
They never listened to you; why would they listen now? Forget it! Next year there will be another batch of DLC called TSW5. There's content that has been waiting to be updated for more than two years, but that's okay: there are always people to support DTG's adventures.
Thanks a lot for participating in the discussion, Will. I think we need to raise attention to this topic, because it's not new; that's why my posts are not new either. I also understand that DTG can't officially support modded ini files, as Matt has already said several times in the past.

First, I want to clarify that this is not a matter of blaming anyone's work. We all (devs and players) can get a much better game with relatively low effort and in relatively short time, but we need to take this process seriously and perform the required changes and tests, and that can only be done by DTG, as they have deeper knowledge of the code and UE4 than any user on the forum. An improved configuration also leaves room for other features to run on those threads, which is something the devs can take advantage of. But optimization has to go in the correct direction.

The first thing to change is the thread sync method. Unreal clearly states that the RHI sync method is the one that should be used, not the old render thread sync. That change is the first step and requires no additional development other than changing a parameter in the project, as UE4 already supports RHI sync. I invite you to lock the fps in game to something your system can handle and use DX12 with only the blue portion of my proposed Engine.ini, on a route where you already have shaders generated, to remove the first-time shader generation effect. That alone is enough to prove that the default parameters need quite a lot of optimization, as you will see a noticeable improvement after just a few seconds.

Shader compilation is not a problem of DX12 nor of Unreal; hitching will always be reproduced on the first run, everywhere. DX12 won't work as optimally as it could without configuration on the project side, that's true, but it still works, because DX12 PSOs are not meant only for shaders but as an overhaul of the complete way the CPU and GPU work together. They are therefore relevant for GPU geometry processing as well, for instance: https://learn.microsoft.com/en-us/w...naging-graphics-pipeline-state-in-direct3d-12

See this reference (for UE5, but more detailed than the older one for UE4) for the particular case of Unreal: https://docs.unrealengine.com/5.0/en-US/optimizing-rendering-with-pso-caches-in-unreal-engine/

"User Cache File: Even if you provide the PSO cache with the game, the user can run into content not covered during the collection. Some drivers can provide their own cache, but to be more independent from driver behavior, the game by default attempts to collect and save missed PSOs to local user cache files. These are located in the game's Saved directory ( FPaths::ProjectSavedDir() ), the same directory as the game's user settings. The application loads those user cache files on start and merges their contents with the files included in the build. User cache PSO files are in recorded cache format. This means they reference the shaders by their SHA hashes, and won't be usable across large game updates that change a lot of content. Because of that, each file embeds a game version, which is checked against the running application. It is configured in DefaultGame.ini, as in the example below, and needs to be increased each time the application issues a possibly incompatible update, such as an update that contains content changes or significant rendering code changes."

Those cache files are generated at "\Documents\My Games\TrainSimWorld4\Saved".
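The docs excerpt above says "as in the example below", but the example itself didn't survive the paste. For completeness, this is roughly what the UE5 documentation shows there; the section and key names are reconstructed from memory, so treat them as an assumption rather than a quote:

; DefaultGame.ini (project side): version key embedded in user PSO cache files
[ShaderPipelineCache.CacheFile]
GameVersionKey=1201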
As one user explained above, simply activating the pipeline cache alone produces an instant improvement, as you would expect. Cheers
It was in the release FAQ or something similar, showing the comparison between DX11 and DX12 compilation stutter in Unreal Engine 4. I will try and find it.
Oh yes, there seemed to be a narrative floating around that DX12 somehow fixed hitches, and we just wanted to put some science behind that. Matt.
Will knows what he's talking about. I don't know geloxo, but he makes sense to me. However, I rely on Will as the single person who knows Train Sim World better than literally anyone in the world, so I'll wait for him to comment further.
Whatever happens with optimisation, I've said it before and I'll say it again: I have no intention of faffing around with .ini files and whatnot. I appreciate it if anyone wants to take the time to do that for themselves, but optimisation can only truly be done under the hood. I'm the player; optimisation is DTG's responsibility. Besides, console players don't have that luxury (I'm not playing on console, but I like to think beyond my own experience). Luckily, I know Matt will agree with this viewpoint and will lead Will and the team to do what it takes to get this software running better than ever, hopefully sooner rather than later, via patches to TSW4.
I have tried the latest iteration of .ini adjustments and it does reduce the hitching, at the expense of overall average FPS. I'm still on an RTX 3080 Ti and a 13700K, whereas geloxo has a 4090 and a similar CPU.
First run, first impression: this is it, a massive improvement, slightly tweaked for my RTX 2080 Ti + 5600X (Eco).
Thanks a lot for the comment, Matt. I will be pretty honest here. I've been on board since TSW 1 and I have been a loyal customer; I have supported this for many years and I currently own almost the complete DLC collection. So I really don't want anything bad for the game, nor for the people working on it, despite being very critical in some posts. I understand that things need to turn a profit, but I also think the game can and deserves to run much better than it does, without DTG having to invest huge effort in it or disturbing roadmaps. I was able to check this myself after just a couple of days of testing, so if Will and the other experts, who as you say know the real details behind the core better than anyone else, could take just a few days to go a bit deeper on this, I'm sure they would come up with a much more optimal and clean solution, balanced for everyone as well, that you can include in the roadmap. Many of the things are just a matter of parameters, so it should be relatively easy for them to define the correct technical approach according to the game's needs. That would give us a better baseline to start the TSW4 journey, and more reasons to go back to the store to continue collecting DLCs now and in the future. Cheers
Also being a long-time player of both TSC (since 2015, 747 Steam DLC) and TSW (since 2016, 74 DLC) and a critical mind, I fully support geloxo. Well written.
You "just" need to set the parameter but you also need to measure and ensure that it's having a meaningful impact on performance. When I did so (a year ago probably) there wasn't a clear improvement and it needs more time spent on research. If it was as clear-cut as "turn on switch for better performance" I would simply do it. Bear in mind that TSW contains a multithreaded simulation engine running in parallel with the RHI, Game and Render threads, as you'll be able to see when the Public Editor releases. But just to put it to bed, I won't be turning on Unreal's PSO cache, at least not for TSW4. It's not possible to enable it with the current project config and changing the project config to match is hard (due to our engine changes and DLC structure).
geloxo: I'm currently testing a combination of your very usable settings (from #43 above) with JetWash's TOD4 suggested lighting enhancements for TSW4, and using Nvidia Image Scaling (NIS), as obtained here: https://www.trainsimcommunity.com/m...ini-tweaks/i3889-tsw4-to-d4-lighting-overhaul

I realise at this early test stage that some settings may override/nullify others. I will adjust as I play with them a bit more. Nevertheless, currently the game looks really good and my frame rate has not dropped below 51 in any of the TSW4 scenarios that I have tried.

My current test Engine.ini (using DX12):

[SystemSettings]
foliage.LODDistanceScale=5
gc.MinDesiredObjectsPerSubTask=50
gc.NumRetriesBeforeForcingGC=1
gc.TimeBetweenPurgingPendingKillObjects=900
r.AllowLandscapeShadows=1
r.AmbientOcclusionLevels=3
r.AmbientOcclusionMethod=1
r.CreateShadersOnLoad=1
r.DFDistanceScale=10
r.DFFullResolution=1
r.DistanceFieldShadowing=1
r.GTSyncType=1
r.LightMaxDrawDistanceScale=50.0
r.MinScreenRadiusForLights=0.00
r.OneFrameThreadLag=1
r.RenderTargetPoolMin=3000
r.Shadow.CSM.MaxCascades=5
r.Shadow.CSM.TransitionScale=2
r.Shadow.DistanceScale=2
r.Shadow.FilterMethod=0
r.Shadow.MaxCSMResolution=4096
r.Shadow.MaxResolution=4096
r.Shadow.RadiusThreshold=0.005
r.Shadow.SpotLightTransitionScale=4096
r.Shadow.WholeSceneShadowCacheMb=18000
r.ShadowQuality=3
r.StaticMeshLODDistanceScale=0.3
r.Streaming.Boost=3
r.Streaming.FramesForFullUpdate=1
r.Streaming.LimitPoolSizeToVRAM=0
r.Streaming.MaxTempMemoryAllowed=18000
r.Streaming.NumStaticComponentsProcessedPerFrame=300
r.Streaming.PoolSize=24000
r.ViewDistanceScale=4
s.ContinuouslyIncrementalGCWhileLevelsPendingPurge=0
s.ForceGCAfterLevelStreamedOut=0
s.LevelStreamingComponentsRegistrationGranularity=100
s.LevelStreamingComponentsUnregistrationGranularity=50
TimeOfDaySystem.AutoExposure.ExposureBias=-0.5
TimeOfDaySystem.AutoExposure.SpeedDown=5
TimeOfDaySystem.AutoExposure.SpeedUp=10
TimeOfDaySystem.BloomIntensity=0.17
TimeOfDaySystem.Clouds.HighAltitude.CloudDensityMult=0.3
TimeOfDaySystem.CloudShadowVolumetricResolutionScale=4
TimeOfDaySystem.LegacyEmissiveAdjustments.EmissiveMultNonLamp=500
TimeOfDaySystem.SkyLightPollutionLuminance=1.0
TimeOfDaySystem.StarIntensity=50
TimeOfDaySystem.SunIntensity=50000
TimeOfDaySystem.VolumetricCloud.GroundContribution=1
TimeOfDaySystem.VolumetricCloud.LayerHeightScale=1.8
TimeOfDaySystem.VolumetricCloud.RayMarchedShadows=1
r.ShaderPipelineCache.Enabled=1
D3D12.PSO.DiskCache=1
D3D12.PSO.DriverOptimizedDiskCache=1
r.ShaderPipelineCache.BatchSize=100
r.ShaderPipelineCache.PrecompileBatchSize=1
r.ShaderPipelineCache.BackgroundBatchSize=1
niagara.CreateShadersOnLoad=1
r.ForceAllCoresForShaderCompiling=1
r.Shaders.FastMath=1
r.Shaders.Optimize=1
g.TimeToBlockOnRenderFence=0
r.RHICmdBalanceParallelLists=1
fx.ForceCompileOnLoad=1
r.ShaderPipelineCache.BatchTime=1
r.ShaderPipelineCache.PrecompileBatchTime=0
s.AsyncLoadingTimeLimit=1
s.PriorityAsyncLoadingExtraTime=0
s.UnregisterComponentsTimeLimit=1
s.LevelStreamingActorsUpdateTimeLimit=1
s.PriorityLevelStreamingActorsUpdateExtraTime=0
r.SkeletalMeshLODBias=-2
grass.densityScale=2
r.Shadow.CSMShadowDistanceFadeoutMultiplier=0.25

My rig: AMD Ryzen 9 5900X 12-core CPU; ASUS Crosshair VIII Formula mobo; Win 11 Pro (64-bit); 32 GB RAM; 2TB Corsair Force Series MP600 Pro PCIe Gen 4.0 M.2 NVMe SSD; 1TB Samsung 960 EVO M.2 NVMe SSD; MSI GeForce RTX 3090 VENTUS 3X 24G OC; Dell G-Sync 144Hz monitor.
I'd be interested to see how you find my Ultra preset on its own. I'm seeing virtually zero hitching or stuttering, and we're on very similar setups. I've got a new TSW4-focussed ini hopefully going up today; just testing it at the moment.
You could have the best hardware on earth; it wouldn't matter if the game is badly optimized. Just look at the PC port of GTA IV. It ran terribly on every PC and still runs badly even on modern hardware. You need mods and fixes to make it run fine.
Hi Will. This is one example of my tests; I hope it helps to give a quick overview. I took some snapshots upon loading the BR185 service starting at Riesa at 6:07h, with the train at a standstill and dynamic weather off, so that the initial situation is always exactly the same.

DX12 default vs DX12 modded ini parameters

FPS on default settings: 118 (8.47 ms), steady
FPS on modded settings: 83 (12.05 ms), steady

Impact of the modded settings:

FPS is lower (as expected, due to the graphical quality increase), but still 70% of the original figure, taking into account that the graphical load was increased substantially (x3 view distance and near-max detail)
Game thread latency is 2.88 times higher (as expected, due to the view distance increase --> more assets to handle)
Render thread latency is only 1.33 times higher
GPU latency is only 1.42 times higher
RHI thread latency is only 1.48 times higher

However, the following is the interesting part of the tests: the same service conditions repeated, now with default parameters and default DX11, i.e. stock game conditions.

DX11 default ini parameters

FPS values on default settings constantly jump between 88 and 112. Observing the lower-range fps figure (88) for the DX11 case, we find:

Game thread latency is similar to the default DX12 case (as expected, due to the unchanged view distance --> same amount of assets to handle)
Render thread latency is 1.08 times higher
Render thread latency is 1.44 times higher than DX12 with the default ini (7.56 ms vs 5.24 ms)
GPU latency is basically the same (less than a 3% difference)
RHI latency is 1.66 times higher
RHI latency is 2.46 times higher than DX12 with the default ini (10.75 ms vs 4.37 ms)

Conclusions

In plain words, those test results mean the following:

Modded settings in DX12 vs default settings in DX12 result in an fps reduction (as expected, due to the graphical quality increase), but that reduction can be recovered by lowering graphical quality
Modded settings in DX12 vs default settings in DX12 only produce around 1.5 times latency increases in Render, GPU and RHI, despite the intense graphical load. That latency is still below 10 ms, so it allows very high fps figures
The game thread is the one creating the bottlenecks with the modded settings in DX12, as it shows higher latency, but that's because we deliberately almost tripled the amount of assets to manage in every cycle by increasing quality
Modded settings in DX12 still produce an fps figure in the range of the lowest one you will get under DX11 in the same scenario, despite the intense graphical load
DX12 results in steady fps values both for default and modded settings, while DX11 results in unstable fps with significant drops. Steady fps figures keep the workload balanced per frame without delaying incoming tasks: the required work can be finished in time each frame and stability is maintained for longer periods
Modded settings in DX12 vs default settings in DX11 still result in lower thread latencies with similar GPU latency, so the GPU is not under any relevant stress
While the render thread is slower, the RHI thread is significantly slower in DX11 compared to DX12. Even the DX12 modded ini settings still give lower latency than DX11 default settings.
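For anyone wanting to reproduce these figures: the per-thread numbers geloxo quotes match what Unreal's built-in stat overlay reports. Assuming you have console access (e.g. via the GodMode tool mentioned later in the thread), the stock UE4 commands would be:

stat unit (shows Frame, Game, Draw/render thread, GPU and RHIT times in ms)
stat unitgraph (the same figures plotted over time, handy for spotting hitches)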
That's why the RHI sync method in DX12 behaves much better and still results in steady fps, even if you dramatically increase the graphical load. Remember: my modded settings are set to produce a heavy graphical load on purpose (x3 view distance and virtually the maximum possible detail), while the default settings always use the normal graphical settings. In my opinion it's quite clear that the default DX11 settings behave even worse than the intense graphics settings in DX12, which are virtually the max possible settings you can set in the game. This is self-explanatory, I would say. But I agree with you that things need to be analyzed carefully and some research is also needed. I was only able to show the stats on screen; with the proper testing and tools you would be able to check those things much better, for sure. I think an interesting window of opportunity is open and I only see benefits there. Cheers
Settings set updated after the TSW4 public release patch. Please check the post with the colors at the end of the first page. Main changes:

Frame times increased to 2 ms to prevent some audio cuts during hitching
Level streaming actors, static objects and garbage collector batch sizes adjusted
D3D12.MaximumFrameLatency added to limit queued frames (DX11 users should use RHI.MaximumFrameLatency=1 instead)
Pipeline cache frame times and batches reduced to minimize the impact of on-demand PSO compilation (according to the Unreal docs, compilation can take up to 100 ms per element)
Distance field shadows setting added with x10 distance (that's the max effective value; the default is 1). This lets you see distant shadows of objects on terrain with no relevant impact on performance, as the default resolution of such shadows is one half.

Note: distance field shadows alone still only render shadows up to the current view distance limits. Heightfield shadows (shadows cast by terrain elevations onto terrain) are also available in some configs in the forum posts, but I didn't include them in mine for one reason: there's still an old artifact in the engine where the free-roam camera object itself casts shadows onto terrain if placed between the sun and some terrain areas at the border of map tiles, resulting in a big shadow band that moves as you move the free camera. See the example at the end of this topic: https://forums.dovetailgames.com/th...ws-and-errors-in-default-configuration.50462/

Cheers
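For reference, a sketch of how those frame-latency lines would sit in the config; the DX12 value is assumed here, since the post doesn't quote it:

; Engine.ini [SystemSettings]: cap how many frames the CPU may queue ahead of the GPU
D3D12.MaximumFrameLatency=1   ; DX12 path (value assumed)
RHI.MaximumFrameLatency=1     ; DX11 equivalent, per the post above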
I tried just your Ultra settings a few moments ago (with Nvidia NIS still on). It looks and works just fine, with virtually no hitching/stuttering, as you also report. There is a slight difference in lighting and colour when compared with mine (to be expected). I removed the "PSO Cache" lines.
Hi. Today I made some interesting findings. We saw in the past that HLOD streaming created hitching, so I decided to run some tests, disabling HLOD texture streaming completely (r.Streaming.HLODStrategy=2) together with increasing the HLOD transition scale (r.HLOD.DistanceOverrideScale=10). This not only removes hitching but has no performance impact at all, because HLOD assets, even where they appear on routes, are usually not implemented over large areas but only group some objects together. This loads all HLOD textures into memory and spawns the group of objects packed together in the HLOD at max detail much earlier and further away from the player. Like some of the other graphical tweaks, this one therefore removes the need for LOD/texture mip transitions.

There's still one minor residual hitch that I can't explain technically, but it has a clear pattern and is indeed not dependent on the settings, so you will also see it using the default ones. I think that's the key to the problem, and the last one to solve, I would say. The effect is not always the same, which is why I can't understand it, so we would need the devs' help to take a look at this.

I found this while testing the tile transitions at Riesa. There are two transitions heading towards Riesa: at MP 75.4 and MP 73.4. They are shortly after the catenary poles and are easy to spot because there are trees right on the border of the transitions. The first one is located exactly at the marked tree in the image below. This one in particular has two effects, depending on the weather conditions, so you will see one or the other:

The next tile, just ahead of where the locomotive is parked, updates the global sun light as it's loaded (and you notice the hitch), and this can result in a big cloud shadow appearing on the tracks ahead of the locomotive that was not there before the transition.
The two trees to the right of the yellow mark change color and also show new shadows that were not visible shortly before the new tile is loaded. This shouldn't happen: while we are still inside the old tile, the objects are at such a short distance that they should already be displayed at full detail before the transition happens.

The second transition happens more or less at the position of the milepost banner. This one is especially interesting because it always reproduces the same effect: new bushes appear and the tree shadows at the yellow mark are also updated, as in the previous case. However, some distant bushes were already rendered even before, so those new 3D objects should not suddenly spawn, as again you are at a distance short enough to display them at full detail even earlier. Remember that for the tests we are using a configuration that deliberately forces max detail to appear even several hundred meters ahead of this position. It may appear to be a simple bush, but can we be sure this is not happening to other objects in other areas as well, even ones with heavier polys and textures than this bush, creating a sudden unexpected load? I at least can't. Something appears to be wrong here.

If you continue driving you will find some more transitions at the next station and at the curve afterwards. At the curve those effects are reproduced on the trees next to the right side of the track, exactly where the transition happens. In all the cases described above that residual hitch is reproduced, no matter what the graphical settings or the view distance are.
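The two HLOD lines from this post, as they would appear in the config; the values are geloxo's and the comments paraphrase his reasoning:

; Engine.ini [SystemSettings]
r.Streaming.HLODStrategy=2        ; disable HLOD texture streaming entirely
r.HLOD.DistanceOverrideScale=10   ; push the HLOD-to-full-detail transition much further out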
It's not caused by a train, as the first train appears when you arrive at Riesa station, the one loading passengers there, so it's still far too distant to be spawned. Whatever is causing this appears to be related to those trees' 3D objects, shadows or lights. If the 3D models have a wrong LOD configuration, that could explain why we see hitching on many routes in countryside areas that have basically a couple of houses and some small forests, i.e. where no hitching should happen at all. The updating of shadows and light is the other thing to observe, as this could point to some error in the tile updating process.

The residual hitch, even if very short, is able to pause the game and cut the high-pitch sounds on the German locomotives, for instance, so it seems the threads are simply stalling for some reason. It does not appear to be the graphical load, because areas with substantially higher scenery density than the ones above, even with several trains inside, don't produce audio cuts while tile-loading hitches happen. That audio cut is the other important factor, as it clearly indicates a discontinuity in the threads.

I updated my config file in the post with the colors, and it gives the best hitch-free and steady rendering I have managed to get in the past years, with still very low frame latency and high fps. The config also includes one new variable (au.DisableParallelSourceProcessing=0) that allows async processing of sound sources and that was disabled when it shouldn't have been, as async processing is what actually helps performance, by creating additional workers on the threads. I will continue testing this config on the preserved collection routes, but it seems to be the best one I can manage to prepare, to be honest. I'm unable to find any way to mitigate that stupid short hitching effect that still remains and appears to be the key to this whole problem. We need the devs' support to clarify what's going on there, because if that's removed or mitigated the game would run virtually hitch-free.

So far HW resource utilization is really good. The GPU works at 100% without core clock fluctuations and with stable VRAM usage, and the CPU is as steady as if you were simply sitting at the Windows desktop. The two peaks there were caused by first-time shader compilation, in a part of the route I had not yet used because I previously deleted the shaders for some shader-specific tests. Cheers
That's some impressive, highly empirical research, geloxo. I can only say that on my PC (specs: see signature) the GPU has much more headroom on DX12; even with the default Engine.ini at 4K (1080p @ 200% scaling) it's a much smoother experience. What struck me is the huge difference in GPU usage: the same scenario needs almost 90% GPU power in DX11, and only about 20% on DX12, while delivering a stable (capped) framerate of 75 FPS. [screenshots: GPU usage in DX11 vs DX12; ignore the zero values] It seems my RTX is actually... bored on DX12 running TSW, so there's a lot of room for .ini improvements.
This is what you would expect. DX11 was released almost 15 years ago; we are talking about the Windows Vista and Windows 7 era. DX12 is also old, but DX12 Ultimate, the version your card and mine use, is just 3 years old, I think. And beyond the new DX12 version, those cards' capabilities have nothing in common with the old DX11-era cards. That's why it makes no sense at all that the game is still using such old implementations. Cheers
I remember the old, great Quake 3 Arena-based games, which used a hardware autodetect feature to adjust some engine cvars (by applying config presets) so you'd get a good out-of-the-box experience without needing to tweak the engine further. Or make three .ini presets and just put them in the Advanced Settings under, let's say, "Performance": Medium, High and Ultra; you get the point. I'm experimenting with the ini; thanks to GodMode's console I can check the effects in real time.
Hmmm, that doesn't sound right. GPU usage should be higher in DX12, not lower. That's the whole point of DX12: it lifts load off the CPU and puts some of those calculations onto the GPU. That's what I see when running DX12; strange that you're not seeing the same thing.
Dumb question, but can someone remind me how to activate DX12? Presumably an entry in the Steam command line. Though I hope it doesn't melt my GTX 1650… Thanks in advance.
So all of those lines, every single one of them, you add to the Engine.ini? I'm on a Gigabyte 3060, i7-11700K with 64 GB RAM, and I play from a hard drive on PC (Steam).
This might also be a dumb question, but is the launch parameter case-sensitive? I've seen it given in the forums with the "dx" in both upper and lower case, so I just wondered if it mattered and, if so, which is correct?
But that's how it is. There are probably more calculations done on DX12, but DX12 is waaay more efficient at doing them than the ancient DX11, resulting in a drop from 88 to 20 percent usage on my end.
But he is on DX12 still with default settings. The workload for the GPU is very small because he is basically rendering only around 1 km ahead, and at 1080p base resolution. Notice VRAM usage is only 4 GB. A 40xx-series card does not need to do much in such a situation. As soon as he increases graphical detail, view distance or texture resolution, the GPU will start to work harder. Cheers
Exactly: I was deliberately testing on vanilla (Ultra settings) to see (and help prove) how much room there is for .ini modifications on modern systems using DX12. The morphing trees make the game look really bad out of the box, and TSW is being criticised and mocked for that. This thread should be pinned imho; unfortunately there's no dedicated Technical subforum here.
It's up to you whether to use all the lines or not. I included some guidelines for tuning the settings in the post with the colors, for people who see too hard a performance hit and still want to test the settings. The absolute minimum to see some improvement would be to use DX12 with r.GTSyncType=1 only, but that won't remove all cases of hitching, and you will still have the intermediate graphical quality and atrocious view distance of the default game. Those settings are basically a proof of concept, so that we can see the game can work a bit better. If you don't want to start adjusting the config, just stay with the configuration you have, if it works fine in your case. That's better than touching things you have doubts about, I would say. Cheers
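A minimal sketch of that baseline, assuming the game is launched with the DX12 parameter discussed earlier in the thread; the value meanings are taken from Unreal's own cvar help text:

; Engine.ini: minimal baseline per the post above
[SystemSettings]
r.GTSyncType=1   ; sync the game thread to the RHI thread
                 ; (0 = render thread, 1 = RHI thread, 2 = swap chain present)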