Spy Code on Linux

  • I have a question about Firefox on Linux. If there is spy code as part of the browser, does it have all the same permissions I have? Can it copy my data out to somewhere, or does Linux not allow that? If it can, is there some way to protect myself against spying?

  • Not sure what that has to do with Firefox as such … unlike Chrome, Firefox is open source and anyone can see what is in it.

    As for the question, it is too broad for me to answer. Do you really have data on your system, used outside your browser, that you're worried about the browser seeing? That is to say, most people would be more worried about their banking information and other things they use inside their browser than about their system information. But if you did have some information outside the browser you were that worried about ... the answer varies by Linux distribution. Some use mandatory access control (e.g. SELinux or AppArmor) to block programs from accessing data outside their remit, and some do not.

  • Ok, what I don't want is for a spy program to make copies of my spreadsheets and doc files and send them to somebody. I know Firefox is open source, but some of the code it executes is resident on a server somewhere, and you can't access that to see it.

    So, exactly how secure is Linux anybody know?

  • Decades ago, in another life far away, I did some programming for the US government, and we used Linux on a modified Apple system in hopes of being really secure. Even by the mid-90s, it turns out, we were not all that secure.

    If a gifted programmer wants to invade your private data, he or she will be able to do so if you access the Internet via any known hardware and software combination. That seems to be one of the lessons of the Snowden disclosures. Truly brilliant people work for large governments worldwide whose only job is to learn about hardware, middleware, and software vulnerabilities. Then, once they know, they act to take advantage of those vulnerabilities, regardless of local and national laws saying they can't. The players in that club include maybe a dozen governments plus the servant corporations who lick their jackboots, and the rest of the world is at their mercy.

    Long and short message: Keep a low tech set of books off line, in a safe room with an aggressive burn protocol, if you need to be secure. Otherwise, expect someone to know all about you if they set their minds and hearts to the task.


  • Along with the burn protocol, you can use an "air gap":


  • Love your blog! Thanks for the link.

    When I say 'off line' I intend to imply the air gap on a stand alone system.

  • "aggressive burn protocol"

    I enjoyed those words! I might have to start using them. lol. The more I thought about it, the funnier it got, but it's actually true!

  • My favorite term is 'low tech.' Like your 1 GB thumb drives, there are loads of extremely low-tech solutions for an off-line system. But they are a pain in the arse to use.

  • How secure can you be? First you need to know your opponent. Being secure against the 14 year old next door is probably much easier than a government.

    Someone said "paranoia is merely heightened awareness" and in some cases, this is true. I'm trying to live my life with only reasonable fear.

  • @Meek:

    How secure can you be? First you need to know your opponent. Being secure against the 14 year old next door is probably much easier than a government.

    Someone said "paranoia is merely heightened awareness" and in some cases, this is true. I'm trying to live my life with only reasonable fear.

    The important question is: how much is your information worth?
    Wealthy pranksters aside, nobody is willing to spend more on obtaining the information you want to keep private than that information is worth to them. This even applies to repressive governments - though knowing that you do not plan to overthrow them could possibly be worth a lot, and knowing that you do plan a coup is obviously immensely valuable.

  • @sgunhouse:

    Firefox is open source and anyone can see what is in it.

    Firefox and Linux both. However, this only buys you so much. Even if you have all the source code for all the software on your system, (have the competence necessary to) review that code yourself and build it all from source, you can't be perfectly sure.

    For one thing, the hardware and firmware on which you're running the software came from a corporation that may have been leant on by a government to include "features" you don't know about. The NSA are known to have done exactly that.

    For another thing, there's a trick (pointed out long ago by one of the early C/Unix developers) that can subvert your building of "trusted" software from known sources that you have inspected. The trick as originally described went as follows: write a cleverly adapted version of the C compiler that shall, when run, recognise when the code it's compiling is either a C compiler or a login program; in either case, have the compiler generate code (not corresponding to anything visible in the source it's compiling) that adds your special features to the program being compiled, which otherwise behaves exactly as you would expect given the provided source code.

    Compile binaries of your hacked compiler. Snip out the parts of your source that add the special features, compile the result with the earlier binary. Your new binary still contains your hacks, but is compiled from source code that lacks it. Publish your pre-built binary and your cleaned-up source. Anyone can verify that compiling that source with this binary produces this binary.

    Now, someone concerned about security uses your compiler to compile a C compiler: they know the source is honest and decent, so they trust it, but the binary they get from your compiler actually contains your compiler's hacks, along with all the clever features they expected in their compiler. It's traditional to use the first build of your compiler to recompile the compiler from scratch, if it was originally built with some other compiler; but that won't actually make any difference to your hacks (although it might produce a different binary, not because of your hacks but because of the different compiler used). If they do a third re-compile, with the second-stage result, they'll get something identical to their second-stage result and feel deeply secure in their knowledge that they can trust their compiler, because they inspected the source. All the same, the binary has your hacks in it.
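    The bootstrap described above can be sketched in a few lines of toy Python. Nothing here is a real compiler - "compiling" is reduced to tagging strings, and every name is invented for illustration - but it shows the essential point: a binary can carry a hack that appears in no source.

```python
# Toy model of Thompson's compiler trick. "Compiling" is just wrapping
# source text; all names are invented purely for this demonstration.

def hacked_compile(source: str) -> str:
    """The attacker's compiler binary: honest output plus hidden extras."""
    binary = "BIN[" + source + "]"
    if "def login(" in source:
        binary += "|BACKDOOR"          # invisible back door in login programs
    if "def compile(" in source:
        binary += "|SELF-PROPAGATE"    # re-inserts this hack into compilers
    return binary

def run_compiler(binary: str, source: str) -> str:
    """Simulate executing a compiler binary on some source."""
    if "|SELF-PROPAGATE" in binary:
        return hacked_compile(source)  # the taint carries forward
    return "BIN[" + source + "]"       # an honest compiler's behaviour

clean_cc_src = "def compile(src): return translate(src)"   # honest source
login_src = "def login(user, pw): return check(user, pw)"  # honest source

stage1 = hacked_compile(clean_cc_src)        # build the "clean" compiler
stage2 = run_compiler(stage1, clean_cc_src)  # rebuild it with itself
print("|SELF-PROPAGATE" in stage2)           # True - still tainted
print("|BACKDOOR" in run_compiler(stage2, login_src))  # True
```

    Both printed lines are True, yet neither `clean_cc_src` nor `login_src` contains any trace of the back door - which is exactly the scenario the paragraphs above describe.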

    Now, when they come to compile the login program, your hacks to the compiler recognise that's what it is and add the back door that lets you log in as the administrator, even though there's nothing in the source code being compiled that says anything about that.

    Obviously, you can do the same for all manner of other software, not just the login program; equally obviously, it can be done for any language; that's not really the point. What matters is that a compiled binary can contain features that aren't visible in the source from which it's compiled, if the compiler (binary) contains hacks to insert those features; and this goes as much for the compiler as for anything else.

    I doubt this has actually been done (the bit where your hacks recognise that what's being compiled is a C compiler, or a login program, would be rather tricky); but the point is mostly a thought experiment. It's possible, albeit actually rather hard to do.

    (There are several radically different approaches one can take to the design of a compiler; any program that would recognise all of them would be in danger of mistaking some other programs for C compilers. There are diverse ways for a C compiler to generate final object code; for your hacks to work with all of them, on CPUs of all architectures, they'd need to happen at some more abstract layer in the process of compiling, but then there'd be a risk that some other part of a compiler that someone else wrote (but your compiler hacked when first compiling) would produce a diagnostic about part of your code, that might alert users to the presence of your hacks.)

    In practice, of course, most computer users don't even build their software from scratch; they download pre-built binaries from a public server that might have been compromised. Their supplier may have a package-signing infrastructure in place; but that merely means the attacker has to compromise the key-signing infrastructure or (at least for users who've never taken part in a key-signing party) the key-distribution channel.
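    The mechanical side of that defence is simple enough to sketch - though, as the paragraph says, checking a digest only moves the trust to however the digest reached you. The file name and contents below are made up for the demo:

```python
# Verify a downloaded file against a digest published over a *separate*
# channel. The "download" here is a throwaway temp file for illustration.
import hashlib, os, tempfile

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

fd, path = tempfile.mkstemp()
os.write(fd, b"pretend this is firefox.tar.bz2")
os.close(fd)

# In real life this would come from the project's signed SHA256SUMS file:
expected = hashlib.sha256(b"pretend this is firefox.tar.bz2").hexdigest()
match = sha256_of(path) == expected
print(match)  # True - the digest matches, so the file wasn't altered in transit
os.remove(path)
```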

    There are ways round this: for example, write (or pick, e.g. Forth) a very, very minimal language and hand-code object code that does an adequate job of compiling that language; now write a C compiler in that language and use your hand-coded object code to compile a C compiler that you can then use to compile a good C compiler from its C sources that you've inspected. Or you could write and build an INTERCAL compiler in C, a Fortran compiler in INTERCAL, an Algol compiler in Fortran, a Haskell compiler in Algol, a Lisp interpreter in Haskell, a Python engine in Lisp, a Perl engine in Python, a Java platform in Perl and a C compiler in Java. If the original C compiler contained hacks that'll reach all the way round that chain successfully, its author deserves a medal (for that matter, so do you, having written all those compilers in diverse languages).
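    A related, cheaper countermeasure is "diverse double-compiling" (David A. Wheeler): rebuild the compiler from one source with two unrelated toolchains and compare what comes out. In reality you compare the second-stage binaries, because two honest compilers legitimately produce different first-stage code; in the toy model below (all names invented), the mismatch already appears at stage two:

```python
# Toy sketch of diverse double-compiling: one suspect and one trusted
# toolchain build the same compiler source; a hidden payload shows up
# as a difference between the rebuilt binaries.

def honest_compile(source: str) -> str:
    return "BIN[" + source + "]"

def hacked_compile(source: str) -> str:
    binary = "BIN[" + source + "]"
    if "def compile(" in source:
        binary += "|SELF-PROPAGATE"    # the invisible self-reinserting hack
    return binary

def run_compiler(binary: str, source: str) -> str:
    if "|SELF-PROPAGATE" in binary:
        return hacked_compile(source)
    return honest_compile(source)

cc_source = "def compile(src): return translate(src)"

# Stage 1: build the compiler's source with both toolchains.
stage1_suspect = hacked_compile(cc_source)
stage1_trusted = honest_compile(cc_source)
# Stage 2: let each stage-1 result rebuild the same source.
stage2_suspect = run_compiler(stage1_suspect, cc_source)
stage2_trusted = run_compiler(stage1_trusted, cc_source)
print(stage2_suspect == stage2_trusted)  # False - the hidden payload shows up
```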

    One way or another - yay, you got round the problem. Unless … did I already mention that you might be unwise to trust your firmware and hardware? I believe I did.

    Perhaps you could build your own computer hardware from scratch (no bought-in silicon chips; either etch your own or build it out of hand-made thermionic valves) and then use that to run diagnostic checks on a really simple modern processor and its firmware. Then build enough software on that to use it to run diagnostic checks on a real machine. I guess you'd be secure then. Maybe. If you didn't miss anything.

    In practice, however, that would all be expensive. Spending that much time and money on your security would sacrifice a bunch of opportunities that might actually be worth more to you than the improvements in security you get as compared to just trusting that one of the better-run Linux distributions is sound. As a rule of thumb, if the distribution comes from inside a corporation that has a lovely professional facade and tells you to trust it, I'll trust it less than if it comes from a community of volunteers whose internecine squabbling (like the recent Debian spat over init) ensures that they're always watching each other like hawks, which makes it much harder for anyone nefarious to slip something by them.

    Security isn't a meaningful concept without first asking: what are you trying to secure, and against what kinds of attacker; how much is it worth to the attacker to get in, and how much is it going to cost you to keep them out; and is there some way to side-step the problem? If you ignore those questions, you can end up spending unbounded amounts of time and money to gain negligible benefits.

    If what you're securing isn't worth much to the attacker, the attacker won't spend much on trying to get at it - unless they're crazy (never ignore that possibility, particularly with governments). If you're spending more on securing yourself against attack than the value (to you) of what you're securing, you're crazy. In fact, you're paranoid: the word doesn't mean "attentive to risks"; it means you're paying excessive attention to certain risks. That distracts you from other things, to your own detriment.

  • Sorry, I did not read your entire message, but websites can execute code that resides in files on their servers. So even if you have open source on your machine, you won't be able to read the program on the server. Open source is good, but not 100%.

  • Listen, I read the article at the link, but he is only considering external attacks from other computers. That's less of an issue than code built into Windows that searches your drive for the latest files, compresses them and sends them out somewhere. I recall walking away from my computer one day for about an hour; when I came back, I found the hard drive lamp, the wireless lamp and the modem lamp all blinking synchronously. I had disabled all automatic updates and I had no virus software running. There should be no reason for this computer to access the internet by itself. The spy code is part of Windows, don't you see? All those updates it keeps doing light up the hard drive in the same way. How do you protect against that? Under Windows you can't, but with Linux you might have a few options.

  • @eddy:

    (…) they'll get something identical to their second-stage result and feel deeply secure in their knowledge that they can trust their compiler, because they inspected the source. All the same, the binary has your hacks in it.

    That is the reason why the really paranoid groups disassemble the code to check it, the same way they would check a piece of malware, and test it on systems fully identical to the production systems before it goes live, thus mitigating the faulty-compiler problem.

    We had some customers doing that at the company I worked for before …

  • When you get right down to it, can you really trust anything / anybody?

    I trust my friends, co-workers, acquaintances, computer and software, until I find that I shouldn't.

    Any OS is an amalgam of bits and pieces composed by a multitude of sources at (perhaps) different times. Some crew then puts them all together to make them work seamlessly.
    They are 'probably' just fine and trustworthy but who really knows.
    If software is developed by information predators, they are not going to make it easy for anyone to find that out.
    I am certainly not qualified to analyze every bit of my OS or the software I use (nor do I have the time), though like many others I do monitor what my computer is doing when I see what I think is unusual activity.

  • Run the Adblock Plus and NoScript add-ons, and GNU/Linux with Firefox or SeaMonkey is likely as safe, for casual use, as a home user could expect.

  • If you are so afraid of your data being stolen, you can always do this:

    Create a partition or container. Encrypt it. Use a strong password. Store your data there. If that is not good enough - mount the partition only when the computer is disconnected from the internet, and create the partition itself on a separate portable hard drive or pendrive 😛

    If you feel paranoid - mount the partition only from a LiveCD 😉
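    On the "strong password" step: disk-encryption tools such as LUKS don't use your passphrase directly - they feed it through a deliberately slow key-derivation function, so a weak passphrase is the main thing an attacker can realistically brute-force. A small sketch of that derivation (the passphrase below is a well-known example, not a recommendation):

```python
# Derive a 256-bit encryption key from a passphrase with scrypt, a slow
# memory-hard KDF of the kind LUKS-style tools use under the hood.
import hashlib, os

passphrase = b"correct horse battery staple"  # example only - pick your own
salt = os.urandom(16)                         # random; stored beside the data

key = hashlib.scrypt(passphrase, salt=salt, n=2**14, r=8, p=1, dklen=32)
print(len(key))  # 32 bytes - a 256-bit key, e.g. for AES-256
```

    The cost parameters (`n`, `r`, `p`) are what make each guess expensive for an attacker; real tools tune them much higher than this sketch.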

  • Good advice

  • I was wondering if the Linux run levels could be employed to restrict access to directories not normally used when on the internet.

  • Run levels? You plan to log out and change run levels just so you can access said directories? There are several better ways …
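    For instance, ordinary Unix permissions already do most of what run levels were being asked to do here: a directory with mode 0700 is inaccessible to every other non-root account, so browsing under a separate low-privilege user keeps the browser out of it. A sketch (the directory is a throwaway stand-in for something like ~/private):

```python
# Lock a directory down to its owner (mode 0700) - the usual alternative
# to juggling run levels. Other non-root accounts then can't enter it.
import os, stat, tempfile

private = tempfile.mkdtemp(prefix="private-")   # stand-in for ~/private
os.chmod(private, stat.S_IRWXU)                 # 0o700: owner only

mode = stat.S_IMODE(os.stat(private).st_mode)
print(oct(mode))  # 0o700
os.rmdir(private)
```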

