This article about Vivaldi is a lie



  • http://www.digitaltrends.com/computing/vivaldi-web-browser-launch/ The article says the Vivaldi browser used only 62.2 MB of RAM with the website's homepage loaded, while Safari occupied the most RAM at 355.1 MB, and Firefox and Chrome took up 254.7 MB and 188.8 MB, respectively. Yet when I open Vivaldi and it loads the Speed Dial page with nothing else, I'm seeing RAM usage of 240 MB.



  • It seems to me that the author only looked at one of the several Vivaldi processes in making his memory determination. I've never seen any Chromium-based browser, Vivaldi included, use that little RAM.
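
    For what it's worth, here is a minimal sketch (assuming Python 3 and the third-party psutil package) of how you could total the memory of every Vivaldi process instead of looking at just one:

```python
# A minimal sketch, assuming Python 3 and the third-party psutil package:
# sum the resident memory (RSS) of every process whose name contains
# "vivaldi", since a Chromium-based browser is split across many processes.
import psutil

total_rss = 0
for proc in psutil.process_iter(["name", "memory_info"]):
    name = (proc.info["name"] or "").lower()
    mem = proc.info["memory_info"]
    if "vivaldi" in name and mem is not None:
        total_rss += mem.rss

print(f"Total Vivaldi RSS: {total_rss / (1024 * 1024):.1f} MB")
```

    Summing RSS double-counts memory shared between the processes, so the real footprint sits somewhere between the largest single process and this total, but either number will be far above 62 MB.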



  • Well, that would explain it.
    Avant is a browser I've used before that uses the least amount of memory, but it has the same problem all other browsers do: it won't release all the memory unless you close the browser entirely.

    I've got 4 GB of RAM, and I find it hard to believe the industry is making that out to be the new bare minimum, considering it has been more than enough for such a long time.
    I'd simply go out and buy more RAM, but DDR2 RAM costs much more than DDR3 or DDR4 (expectations of the price dropping haven't been met; four individual sticks totalling 4 GB cost more than a single 4 GB stick of DDR3).

    My mother runs Windows 10 with either 1 GB or 2 GB of RAM (I don't remember; I think it's 2 GB) and she hasn't had a single problem.
    (Though her version is 32-bit while I'm using 64-bit Windows 7.)
    I've used the Edge browser on Windows 10 and it seems to be the fastest browser I've ever used. I've used a browser or two on Windows 7 that was comparably fast, but Edge is supposed to be the newest, and I didn't need to tweak anything or settle for less.

    I think the problem with Avant was that it didn't use hardware acceleration for YouTube videos, and I don't think I used it long enough to learn whether it had an ad blocker (ads significantly slow down page loading even though I've got 10 Mbit/s internet now, while pages with ads used to load better way back when I had 3 Mbit/s internet).
    Plus my processor is at least twice as good as it was back then, which makes the slowdown seem embarrassing and irresponsible.

    For some reason Vivaldi has slowed down some since I first installed it; I think I might go back to Firefox or try Avant again.
    I can't see any reason to stay with Vivaldi when Chrome is faster, and I think Firefox uses less RAM, or at least I was happier with its 60 fps YouTube video performance (because Chrome wasn't using hardware acceleration while Firefox was).
    Neither one of them will play 1080p 60 fps YouTube videos flawlessly, yet Chrome was said to be the first browser that could do it.
    They had better get their act together, considering 4K is right around the corner.



  • @anwaypasible:

    … I've got 4 GB of RAM, and I find it hard to believe the industry is making that out to be the new bare minimum, considering it has been more than enough for such a long time. … My mother runs Windows 10 with either 1 GB or 2 GB of RAM (I don't remember; I think it's 2 GB) and she hasn't had a single problem. (Though her version is 32-bit while I'm using 64-bit Windows 7.) …

    The problem is that software size and CPU demands tend to grow as fast as prevailing new hardware designs make RAM space and speed available off the shelf. While an OS may be advertised to run at some given low RAM level, that doesn't usually play out in a typical installation. I run Win7-64 on a system with 8 GB of RAM, and it routinely sits at 2.3 GB of RAM consumption with just the OS, an AV, and a handful of minor, low-impact taskbar tools/processes/services running. With 32-bit Vivaldi running just one tab, that consumption ramps up to 3 GB after an hour or so of active browsing (as the history and cache files grow). That seems to be the nature of newer multi-process browsers. As you start to approach your RAM limit, the OS and browser will start swapping RAM with disk-based virtual memory (a lot slower). Also, keep in mind that a 64-bit OS and its applications will consume noticeably more RAM than 32-bit versions, since pointers and many internal structures double in width - hence systems rated for 64-bit OSs typically come with twice the RAM of those rated for 32-bit OSs - so direct RAM consumption comparisons between the two types can be tricky.
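
    If anyone wants to see that pointer-width difference for themselves, here is a tiny Python check; it only shows why 64-bit builds use more memory per pointer, not an exact consumption ratio:

```python
# Quick check of the native pointer width, which is one reason 64-bit builds
# use more RAM than 32-bit builds (pointers and many internal structures
# double in size, though total usage does not literally double).
import ctypes
import struct

print("Pointer size:", ctypes.sizeof(ctypes.c_void_p), "bytes")  # 8 on 64-bit, 4 on 32-bit
print("Interpreter is", struct.calcsize("P") * 8, "bit")
```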

    If Vivaldi has slowed down, it may be worthwhile to pare down the Topsites and Topsites Journal files in your profile folder by deleting them (with Vivaldi closed). Vivaldi will create new ones, but they'll be a whole lot smaller until they gradually expand with continued browsing. I've gotten into the habit of dumping them every week or so, just to keep them from over-growing.
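
    If you'd rather script that cleanup, something like the sketch below could do it. The profile path and file names here are assumptions (Chromium-family profiles typically store them as "Top Sites" and "Top Sites-journal"), so adjust them to match your own installation, and only run it while Vivaldi is completely closed:

```python
# Hypothetical cleanup helper. The profile path and file names below are
# assumptions (Chromium-family profiles typically store "Top Sites" and
# "Top Sites-journal"); adjust them to your own installation, and only run
# this while Vivaldi is completely closed.
from pathlib import Path

# Example Windows location; other platforms keep the profile elsewhere.
profile = Path.home() / "AppData" / "Local" / "Vivaldi" / "User Data" / "Default"

for name in ("Top Sites", "Top Sites-journal"):
    target = profile / name
    if target.exists():
        target.unlink()
        print(f"deleted {target}")
    else:
        print(f"not found: {target}")
```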

    A single-process browser like Firefox (or Olde Opera) will use less RAM than a multi-process design like Chrome, nu-Opera, Vivaldi, etc., just by the nature of the design and the principles of efficiency. That doesn't mean the single-process designs will handle multiple complex pages simultaneously as well, or be as tolerant of an extension or tab crash, as a multi-process design. There are all kinds of tradeoffs at work, just as with most things digital.
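
    A toy Python sketch of that crash-isolation tradeoff (the function and URLs are purely illustrative, not browser code): a child process can die without taking the parent down, at the cost of extra per-process overhead.

```python
# Toy illustration of the crash-isolation tradeoff: work done in a child
# process can die without taking the parent down, at the cost of extra
# per-process overhead. (render_tab and the URLs are purely illustrative,
# not browser code.)
import multiprocessing as mp
import os

def render_tab(url):
    if "bad" in url:
        os._exit(1)  # simulate a renderer crash
    print(f"rendered {url} in pid {os.getpid()}")

if __name__ == "__main__":
    for url in ["https://vivaldi.com", "https://bad.example"]:
        p = mp.Process(target=render_tab, args=(url,))
        p.start()
        p.join()
        print(f"{url} -> exit code {p.exitcode}")
    print("parent survived, pid", os.getpid())
```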

    In any case, the demands software places on hardware will only grow with time, and a system typical of off-the-shelf specs from 5 or more years earlier will always be significantly stressed by newer software of any complexity - and browsers are now one of the most complex kinds of application software an average user runs. I've run small computers since their infancy (as mini-computers) in the 1970s, and this 5-year observation has held true all the way along.



  • Working within the frame that is given:
    First of all, most people will easily see how the 1s and 0s are either twice as many or twice as long when comparing 32-bit with 64-bit.
    Because even if your math results are FIR, the space occupied by those markings continues to require the same amount (unless you are stamping those markings in a way that notes their graphed location, thus freeing up storage cells around them; but that requires storage cells strong enough to hold them and keep them from distorting each other, and it also requires components on the circuit board that can keep those particles in line without touching or distorting each other).

    But here is the thing other people are going to agree with too:
    Software 5-10 years ago was supposed to be secure. As time goes on and security becomes a question, all they needed to do was change the password of their software.
    If the operating system doesn't get its passwords changed, then people will hack at it until they can use the operating system itself to reveal the passwords of each individual piece of software.
    But that is a fault of the operating system (and of not receiving password updates) rather than a fault of the software passwords (because those passwords would be had whether they changed them or not).

    HTML5 is newer, so it is expected to be longer (at least), even if the bit depth doesn't change.
    But there are some things that simply aren't newer, such as the text, the pictures of at most 10 bits per channel (30-bit 'deep color'), and the audio that hasn't necessarily changed sample rate (even though compression is specifically bit-depth related, causing the bit rate to change, since the original bit depth is lowered in some or all places, thus changing the size of the audio stream for every single sample).

    With that said, it isn't necessarily fair to include audio in the equation, because there have been audio improvements on websites such as YouTube, right?
    Yet websites such as Pandora haven't seemed to change over the last 5 years or more.
    There are other online radio stations streaming the same 128 kbit/s through 320 kbit/s that they have been for 5+ years.

    Technically, HTML5 simply changes the glove that catches the ball - thus a single-process browser would see more RAM usage opening the browser with a blank page and the 'receiving window' loaded.
    Multi-process browsers might load that new glove for each tab, thus increasing the load in RAM significantly.
    But as I said, the ball and the field and the bat and perhaps even the players are all the same, as it is still the same game.
    The TCP internet protocol hasn't changed, but the content in the packets might have changed - specifically, more packets are necessary today compared to the number of packets needed for the same thing 5+ years ago.
    But that is because of HTML5 and all the new things it does; I don't know the specifics, but I assume there are new things for the way the layout of the webpage is done.

    Though it needs to be known, they haven't increased the security of the information as it is being sent to and/or from the internet.
    All they did was change the way websites hold their layout information, hoping it would stop hackers from abusing the voids in the programming as a whole. Yet we continue to hear about people being able to hack using a website, and that means the programming as a whole isn't perfect, and that means we are dealing with idiots.
    A perfect assembly has zero voids, and then from there something must be done about people trying to look for those voids, as well as about people who use code that simply isn't required for browsing the internet (or whatever other software is running, because those software programs could use multiple tables of reference that tell what is and isn't required; thus if anybody does anything that doesn't fit those tables, they'd be had).

    When your programming assembly is perfect, next come the people who sit there and stare at it, reading it until the entire compiler has been bled out and they've basically got a translation table that can be used to decode any sensitive information they want.
    That is why it is vitally important to update the operating system with new passwords before anybody can bleed out the compiler and create a translation table, because if the translation table isn't complete, then they don't really know what goes where.
    If you simply change something that they've seen done a certain way repeatedly in the past, they are going to note that change and know the password was changed; but they will also know they can simply execute the program and watch to see if the password is the same or new (that way they know whether their translation table is 'current' and 'accurate' or whether they need to sit down and stare at the operating system working some more).
    Obviously, with that said, the operating system would be improved if it could randomize the execution to prevent a person from simply sitting down, executing the program, and watching the code once.
    More randomization means it takes them longer to bleed the compiler fully.

    I would assume they could guess the last small portion of the table, and that means there is no waiting for updates.
    If they can bleed the entire compiler, then they'd gain access to the kernel, and if they get access to the kernel, they could take total control of the computer or use its hardware to do illicit processing for them, as if simply scaling processing power like Nvidia's SLI or AMD's CrossFire.
    Because password information would already be decoded, lives could be stressed to the point of ruin.
    If they can access the kernel, they can also do annoying things such as popups, or locking the computer and holding it for ransom, or removing keyboard or mouse input too.
    Thus yes, they'd also be able to see what you are typing, or access your microphone or webcam, or see the shapes you draw with the mouse, or listen to the audio you listen to, or watch the videos you watch.
    If they get the kernel, they'd be able to browse through your files too.

    The saddest part is that all of the above is consumer-level stuff, without a single bit of spy technology.

    Back to what I was saying:
    Why does the length of passwords need to change? If it was good before, simply changing the password should be good to go again.
    The only thing that can break that is an operating system that hasn't had its passwords changed to prevent people from getting to the software passwords.

    You say software will only grow in complexity as time goes on.
    I'm not going to say that isn't true, but I'm also not going to say it is necessary for all software.
    Let's use a video game as an example, because I've played some video games that ran without a single problem in their functionality (despite the players, perhaps).
    Whatever bit depth it took to create a character that can walk around without going into an object, without falling through the floor, keeping the tags of the bullets functioning, being able to jump, being able to look around - none of that really needs to change, though the passwords that protect those things might need to change.
    Because from there, the developers are simply adding on top of it.
    For example, if they put a simple cube on the floor that you could walk up to without going inside it, without skipping or juddering while walking, and could jump on top of without falling through or juddering - then really all they are doing is increasing the vertices and thus the lines that connect those vertices.
    If you can put a picture on the walls of the cube, they are simply doing the same thing with more pixels and a higher bit depth of colors.
    Lights? A basic light that is totally on, with an edge that is totally off, is the foundation.
    From there, they are doing nothing more than applying a filter that changes the edge of the light.
    And after they've used up all the filters on the edges of the lights (such as the edge of light on the floor), they go the opposite way (from the floor to the bulb of the light itself) and the filter applies to everything spread out.
    Yet from the beginning of lights rendered in real time, there has always been some sort of spreading out from the bulb (because before that they simply baked the light into the textures (pictures), since the lights didn't move and there wasn't a shadow going on top of them).
    Atmosphere effects are really quite the same as a light filter, except a light has a single point source while the atmosphere is flipped exactly opposite (and then the location of the source doesn't exist anymore either).
    Atmosphere effects are always a manipulation of pixels one way or another; it could be simply blurring the pixels or adding something on top of them. And once you start adding on top of the pixels, such as fog, then you might as well be able to do raindrops or snowflakes (but the caveat here is that fog can be a 2D visual while rain and snowflakes need to be 3D for realism while moving, because a static 2D image that simply stays in place will cause the raindrops or snowflakes to move with the movement of the character's view).
    But suffice to say, that is about it - unless you get into being able to shoot through something, or bullets that can ricochet, or objects that can be destroyed (but these are simply 4D animations, because if they were 3D there wouldn't be any pictures inside the object when it explodes or crumbles to the ground).
    But if you've made it this far, then yes, you should easily be able to see how the amount of code that needs to be processed gets longer and longer.
    Simply increasing the resolution of the textures on objects requires longer and longer processing on the graphics card (a quick calculation just after this paragraph shows how the numbers scale).
    You might be using a specific bit depth for the texture pictures, but if all those colors don't exist in the texture, any additional colors technically add length to the processing (that's like increasing the bit rate within the frame of a locked bit depth).
    Increasing the bit depth of the textures adds even more length to the processing - but they said DirectX already does 10-bit-per-sample color (I don't know if it came in DirectX 9 or 10, but I know the GTX 260 was said to do 10-bit-per-sample color with DirectX, and its highest supported version is 10).
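
    To put rough numbers on that texture point, here is a back-of-the-envelope Python calculation (uncompressed sizes only, ignoring padding, mipmaps, and texture compression); it just shows how raw texture data grows with resolution and per-channel bit depth:

```python
# Back-of-the-envelope texture sizes: raw (uncompressed) data grows with
# resolution and per-channel bit depth, so there is simply more for the GPU
# to move and filter. Padding, mipmaps, and compression are ignored here.
def texture_bytes(width, height, channels=4, bits_per_channel=8):
    return width * height * channels * bits_per_channel // 8

for side, bits in [(1024, 8), (2048, 8), (2048, 10), (4096, 10)]:
    mb = texture_bytes(side, side, bits_per_channel=bits) / (1024 * 1024)
    print(f"{side}x{side} at {bits} bits/channel ~ {mb:.1f} MB raw")
```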

    Funny how simple textures such as the ones used back in F.E.A.R. Combat could look somewhat realistic, and that free-to-play game was released 10 years ago.
    Then there's the water of the ocean at the beginning of BioShock, which was stunning enough to make a person's soul (or consciousness) re-seat... and that was a static animation, not a live rendered process.

    Games nowadays waste A LOT of processing power on the size of the maps - not only the size of the maps but also the view distance.
    I assume the hardware available today could re-create a carnival game that looks real enough to trick the brain into thinking it is real (if the monitor's colors could keep up), and I'm talking real as in the desire to walk into it, fall into it, jump into it, reach out for the screen and bump it with your hand because you couldn't tell where the physical screen is.

    Another problem with games today is their choice of colors. In hundreds of games (perhaps thousands), one color is simply too similar to the next, creating a scene that is hard to see.
    Technically, if you think back to the Atari, those games were super high contrast because there'd be a solid color on top of black.
    The first Nintendo is another good example, because the solid colors of the 8-bit era made it easy to tell one color from the next.
    Then everything started to go downhill with 16-bit games.
    It didn't matter if it was the Sega Genesis or arcade games, because they both suffered from it.
    Some games are worse than others, but it is usually a situation comparable to a 'white-out' in a blizzard - except simply choose your color.
    I'm not fooled by the other games that keep colors much the same and then use light reflections of a vastly different color to add reflections as if the scene is thereby high contrast.

    Another big achievement in video game history was Battlefield 3 - the graphics in the subway specifically.
    They were the most realistic graphics available at the time, and that game came out almost 5 years ago.
    Graphics processing power has at least doubled since then, so the same map with twice the color bit depth would look twice as real.
    It gave the realism of F.E.A.R. Combat a run for the top spot.

    There was one more thing that really caught my eye, and that was the fat belching gun called the 'auto cannon' in the video game Prey, released 10 years ago.
    Holding that gun had enough depth that it made me want to lean into the screen, simply because there was room to do it.

    Well, the drill in the first BioShock also had enough realism and depth that it made me think it was meat.

    I could summarize the game portion down to this: what good is the extra processing power if they waste or abuse it?
    All processors are supposed to have a bit depth limit, and if your math equations can't be fully resolved within those limits, then they will simply need to be longer (or become more equations) - thus more computing is necessary.

    However, I think when you said 5-year-old computer hardware will be significantly stressed by new software of any complexity - well, that is totally false.
    There are still some Core 2 Quad processors from 9 years ago that can do anything except new-generation graphics, and those setups are expected to continue being enough for another 5-10 years.
    All you need is enough RAM and a graphics card that can decode VP9, because that'll get you up to 4K decoding (though it isn't 120 fps, and it might not be 60 fps either, even though it should be).
    VP9 is used for 60 fps 1080p videos on YouTube, as well as their 4K videos, but I don't know what their 4K frame rate is (a quick way to check a local file's codec and frame rate is sketched right after this paragraph).
    I would expect them to continue using VP9 for 60 fps 4K videos at least. Later on down the road, televisions will have more than 4K video, and they'll either need a new codec or will upgrade the old one (but they said they are working on VP10, and if they can put VP9 decoding and encoding on the graphics card, they can put VP10 on the graphics cards too).
    Besides, people expect a graphics card to do graphics processing; people won't ever go out and buy one because of incentive if the graphics card does nothing but text and pictures and video games.
    It's a slap in the face to see expensive graphics cards for sale that can't use their processing power for video decoding and encoding - especially considering they've got those processing cores that allow other random software to run on the graphics card. People would cry foul claiming it isn't fair, simply because it truly isn't.
    Graphics processors are always supposed to process graphics, but being able to run other software on those graphics processors is seen as a plus for the industry of computing as well as the industry of graphics cards.
    But if they decide to slow down or back out, then the processing power of a graphics card is going to go the way of PCIe SSD cards, and instead of memory on those cards it is going to be additional general processing.
    Why would they invest in such a device? Because they know that once they upgrade the rest of the hardware, they've still got that additional processing power that can be added to it.
    It happens already: somebody buys a graphics card with an old CPU and RAM, and the game doesn't run as well as they want it to, and they know getting a faster graphics card isn't going to help because the CPU is too slow; then they go out and get a new motherboard, new CPU, and new RAM, install the graphics card in the new system, and the game runs as well as necessary.
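
    As promised above, here is a small sketch that shells out to ffprobe (part of FFmpeg, assumed to be installed and on the PATH) to report a local file's codec, resolution, and frame rate; "sample.webm" is only a placeholder path:

```python
# Sketch that shells out to ffprobe (part of FFmpeg, assumed to be on PATH)
# to report the codec, resolution, and frame rate of a local video file,
# e.g. to see whether a downloaded clip really is VP9 at 60 fps.
# "sample.webm" is only a placeholder path.
import subprocess

def video_stream_info(path):
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "stream=codec_name,width,height,avg_frame_rate",
         "-of", "default=noprint_wrappers=1", path],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

print(video_stream_info("sample.webm"))
```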

    The only Nvidia graphics cards that will fully decode VP9 are the GTX 950 and GTX 960; even the new GTX Titan X only does partial decoding, with no encoding (it's a hybrid between GPU and CPU).
    The Intel Skylake processors only do partial decoding too (split between the on-chip graphics and the CPU).
    When a webcam has an onboard encoder without software to configure it, people might rather skip the onboard encoder and try the CPU or the graphics card instead, for reasons such as changing the resolution, changing the format, changing the bit rate to match their internet upload speed, or even applying filters to the video on the fly.
    Removing the background seems to be a popular filter used with webcams when you see people live-streaming themselves playing a video game (a rough sketch of this kind of filter appears at the end of this post).
    Then there are always kids who are willing to play with filters, but those filters desperately need updated quality.
    They've gone the way of old hard drives from the 1950s:
    poor motion tracking, poor resizing, obnoxiously poor colors too.
    Some BioShock water for when they are sad might help :silly:
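
    Here is the rough sketch of that kind of background-removal filter (assuming Python with the third-party opencv-python package and a default webcam at index 0); it uses a stock background subtractor, and the quality depends heavily on lighting, which is probably why the results often look rough:

```python
# Rough sketch of a webcam background-removal filter, assuming the
# third-party opencv-python package and a default webcam at index 0.
# A stock background subtractor is used; quality depends heavily on lighting.
import cv2

cap = cv2.VideoCapture(0)                          # default webcam
subtractor = cv2.createBackgroundSubtractorMOG2()  # learns the static background

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                 # foreground mask
    foreground = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow("foreground only", foreground)
    if cv2.waitKey(1) & 0xFF == ord("q"):          # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```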

