
Thread: CPU temperature

  1. #11
    AMD has certainly done great stuff recently. Their integrated graphics seem to be really good. Maybe my next computer will be an AMD, or an ARM. But for now I can't buy a new computer yet. Got to make do with what I have until a pressing need for a new one comes along. Typically those pressing needs occur every four years or so, for various reasons.


    I also hear that ARM or perhaps RISC-V is the future. From my point of view it doesn't matter much, as long as they can be built fan-less and team Lazarus supports that architecture. Of course, losing access to a lot of old software, and the probable problems with peripherals, is a hassle. I still have an ancient Win 7 laptop just for occasionally using an ancient but fully operational scanner. It also has Minesweeper.


    Your wooden case/air duct system sounds interesting. Please keep us updated and let us know if/when you pursue that path.


    There are no doubt a plethora of possible solutions to the fan-noise problem out there.


    Does any other forum member have a clever solution to share?

  2. #12
    Quote Originally Posted by Jonax View Post
    something Intel calls package temperature.
    That thing is quite treacherous. I was monitoring my laptop for more than a decade with SpeedFan in Win7. There are three temps: ACPI, Intel Core 0 and Intel Core 1. The ACPI one follows the Core 0 graph closely, just a bit more smoothed, with Core 1 colder, zigzagging likewise a bit below them. No notable surprises in this behavior in ten years.

    Then, suddenly, I play Brutal Doom and get a huge overdraw and fps slowdown: it's the Intel integrated GPU, and I am guarding a doorway rushed by half the monsters on the map (the random map generator sometimes makes things FUN), like 300 Spartans with a BFG. Of course the doorway gets splattered with so many layers of blood decals that it brings the Intel HD 3000 to its knees.

    But here's the interesting part: I look at the graph and suddenly see this "package temperature" soaring towards 90C, away from the cores steadily zigzagging around 75.
    Conclusion: it's the only sensor that takes gpu into consideration - and that gpu *could* heat up horribly in rare, rare circumstances.
    I'd provide a screenshot (a screenshot is worth a thousand words) but my site's Let's Encrypt config is broken and I am still too lazy to fix it.

    Real use cases, wild and woolly, always have untold number of surprises in store.

  3. #13
    Quote Originally Posted by Chebmaster View Post
    But here's the interesting part: I look at the graph and suddenly see this "package temperature" soaring towards 90C, away from the cores steadily zigzagging around 75.
    Conclusion: it's the only sensor that takes gpu into consideration - and that gpu *could* heat up horribly in rare, rare circumstances.
    If you call those rare circumstances, then I guess you are one of the smarter people who keep V-Sync enabled all the time.
    There are so many games (both indie and AAA) that are literally capable of melting integrated GPUs when simply left at the main menu, because the frame rate climbs extremely high there. High FPS, even when only doing simple 2D rendering of a main menu, can still cause high GPU utilization, and when you consider that in order to reach such high FPS the CPU also has to send lots of render calls to the GPU (high utilization of at least one CPU core), you can quickly understand why this can be so dangerous.
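
    For anyone who wants to guard against that in their own projects: a crude frame cap (independent of V-Sync) already goes a long way. Below is a minimal Free Pascal sketch of the idea, assuming a plain single-threaded game loop; UpdateGame and RenderFrame are stand-ins for your own routines, not anybody's actual engine code.

    Code:
    program FrameCapDemo;
    {$mode objfpc}

    uses
      SysUtils;

    const
      TargetFPS = 60;
      FrameBudgetMs = 1000 div TargetFPS; // ~16 ms per frame at 60 fps

    var
      FrameCount: Integer = 0;

    // Stand-ins for a real game's logic and rendering (assumptions only).
    procedure UpdateGame; begin end;
    procedure RenderFrame; begin Inc(FrameCount); end;

    procedure RunCappedLoop;
    var
      FrameStart, Elapsed: QWord;
    begin
      while FrameCount < 300 do // run a few seconds for the demo
      begin
        FrameStart := GetTickCount64;

        UpdateGame;
        RenderFrame;

        // Sleep away whatever is left of the frame budget, so an idle
        // main menu cannot spin the CPU and GPU at thousands of fps.
        Elapsed := GetTickCount64 - FrameStart;
        if Elapsed < FrameBudgetMs then
          Sleep(FrameBudgetMs - Elapsed);
      end;
    end;

    begin
      RunCappedLoop;
      WriteLn('Rendered ', FrameCount, ' frames.');
    end.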


  4. #14
    Quote Originally Posted by SilverWarior View Post
    I guess you are one of the smarter people who keep V-Sync enabled all the time.
    Nope.
    HD 3000 is just that wimpy.
    OpenArena (a Quake 3 clone) can only manage 60 fps at 1024x768.
    Factorio ran at ~25 fps on minimal settings, caused an overheat shutdown; stopped playing it on this machine.
    Team Fortress 2 ran at ~30 fps on minimal settings, caused an overheat shutdown; stopped playing it on this machine.
    So I only play outdated Zandronum + Brutal Doom here, at 1280x720 at that.

    Note that I managed to heat it to 85C once, by rendering a rotating cube with immense supersampling (more than 4K, AFAIR) downsampled to Full HD in a pixel shader.
    But even then, the package graph never strayed far from the Core 0 graph.

    Hmm... The circumstances were quite specific: tons of overdraw, *but* most of those decals were the same texture, quite stretched.
    Maybe the GPU in my case is always limited by how much system memory bandwidth it can scrounge -- but if all the textures fit in its own cache, it has a chance to go all out and clock up?

  5. #15
    Quote Originally Posted by Chebmaster View Post
    Nope.
    HD 3000 is just that wimpy.
    Can't comment on other games, but I do know that Factorio is a CPU-heavy game that actually supports multi-threading and can thus fully utilize both of your CPU cores. Having a bigger base in Factorio can cause FPS drops even on my Acer Aspire 6530 laptop with a dual-core AMD Athlon processor, since it maxes out both CPU cores. At the same time the dedicated ATI Mobility Radeon HD 3200 graphics card never even gets near full utilization. And yes, this is a laptop that is actually capable of running the original Crysis, albeit on minimal settings.

    Anyway, when was the last time you tried playing Factorio on your laptop?
    The Factorio developers managed to do some huge optimizations before its full release: optimized belt simulation, optimized robot path-finding and simulation, massively optimized biter path-finding, a redesigned pipe fluid simulation, massively improved path-finding, collision detection and overall simulation of trains, and much more.
    They even managed to make some improvements to the rendering pipeline, improving performance even on low-end computers with integrated graphics.

    Anyway, in order to see whether your computer is CPU or GPU bound, I would recommend using tools like Process Explorer (https://learn.microsoft.com/en-us/sy...ocess-explorer), which can monitor your system utilization in real time. This was the most-used application on my laptop, since it allowed me to track down any background processes that could bog down my computer. Not to mention that with its help I managed to adjust the graphics settings of individual games to the point where neither my CPU nor my GPU was fully utilized.
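
    If you prefer to check from inside your own game instead (a different trick, not what Process Explorer does), you can time the CPU-side work and the buffer swap separately: with V-Sync off, a frame where the swap dwarfs the logic time is usually GPU bound, and the other way round points at the CPU. A rough Free Pascal sketch with stand-in routines follows; the Sleep calls only simulate work, so swap in your real engine calls.

    Code:
    program BoundCheckSketch;
    {$mode objfpc}

    uses
      SysUtils;

    // Stand-ins for real engine calls (assumptions, replace with your own).
    procedure UpdateAndSubmit; begin Sleep(4); end;  // CPU-side simulation + draw call submission
    procedure SwapBuffers;     begin Sleep(12); end; // with V-Sync off this roughly waits for the GPU

    var
      T0, T1, T2: QWord;
      Frame: Integer;
    begin
      for Frame := 1 to 10 do
      begin
        T0 := GetTickCount64;
        UpdateAndSubmit;
        T1 := GetTickCount64;
        SwapBuffers;
        T2 := GetTickCount64;
        // Note: GetTickCount64 is millisecond-coarse; average over many
        // frames rather than trusting a single measurement.
        WriteLn('frame ', Frame, ': cpu ', T1 - T0, ' ms, swap ', T2 - T1, ' ms');
      end;
    end.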

  6. #16
    I had no idea what the 'package temperature' was. Nice to have an idea about that now.


    I can add that what I call CPU temperature is actually from Core 1. I've got 4 cores, but so far they've all shown the same value whenever I check, so I simplified things by just logging Core 1.
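
    In case it helps anyone wondering how that per-core logging can be done: on Linux the Intel coretemp driver exposes the readings as plain text files under /sys/class/hwmon/. A minimal Free Pascal sketch follows, with the caveat that the hwmon index and temp channel are assumptions and differ between machines (check the /sys/class/hwmon/*/name files to find the coretemp entry; temp1_input is usually the package and temp2_input the first core).

    Code:
    program LogCoreTemp;
    {$mode objfpc}

    uses
      SysUtils;

    const
      // Assumed path; adjust the hwmon index and channel for your machine.
      SensorFile = '/sys/class/hwmon/hwmon1/temp2_input';

    function ReadTempCelsius(const Path: string): Double;
    var
      F: TextFile;
      Line: string;
    begin
      AssignFile(F, Path);
      Reset(F);
      try
        ReadLn(F, Line);                      // file holds millidegrees, e.g. "45000"
        Result := StrToInt(Trim(Line)) / 1000.0;
      finally
        CloseFile(F);
      end;
    end;

    begin
      WriteLn(FormatDateTime('hh:nn:ss', Now), '  ',
              ReadTempCelsius(SensorFile):0:1, ' C');
    end.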


    CPU vs GPU constrained:


    For my part the GPU is mostly not a problem. I don't play modern games, and the games I make are 2D and not very demanding on the hardware.


    Most of the time the integrated graphics is enough for me. The first time I noticed a GPU limit at all was about five years ago, when the monitor broke and I bought a larger one. Then the old J1900's integrated graphics turned out to be inadequate for watching movies. That was one big motivation to buy a newer PC, so I got a J5005 Pentium and just recently also the latest Pentium, dedicated to Linux.


    My programming project: I'm dabbling with yet another Linux mini game with short playtime and low hardware demands. Working title:


    "Vesta forever"


    I hope to be able to present a beta any week now. Then I'm looking forward to feedback from the community for further polishing.
