The IT Department Doesn’t Care About Your Productivity

The IT department in your medium-to-large organisation doesn’t care about your productivity. In most organisations IT is a cost centre, not an enabler or a source of competitive advantage, and your organisation is no different. Shelling out more money for better networking hardware, more storage, or new software licenses has real, measurable costs that the IT department has to front up for, and which they have to pass on. When you have to wait 30 seconds between double-clicking a network folder and the contents displaying on screen, the cost is so small it’s hard to measure, and it isn’t a cost that the IT department pays. Sure, it robs people of their productivity, bringing into sharp focus for most of them just how unimportant they are, and invites them to task-switch to something else, which has proven detrimental effects on productivity, but that isn’t the IT department’s problem now, is it?

Your productivity is collateral damage in the IT department’s war with entropy. Flexibility is the enemy. Flexibility leads to change (forward progress is one kind of change, but allowing users to change their desktop background or their system font is another). Change leads to increased costs – one out of every thousand users who change their desktop background is going to accidentally lock themselves out of their account somehow, and cost the IT department in support calls to the helpdesk (a total misnomer, since there is no desk and no help). In order to try and put a lid on entropy, things are closed and locked down. Legitimate sites and MIME types are inadvertently blocked at the corporate firewall, because there is a risk to the IT department in letting them through, and there is no cost to the IT department in preventing you from doing your job.

This goes double if you’re a developer. The IT department likes homogeneity. They like taking away user ‘rights’ and locking things down. They crave order, structure and minimal surface area. As a developer you don’t fit their model. You need access to configure web or database servers, to start and stop services, to install programs, to attach debuggers. You need exactly the same kinds of rights they have themselves, but you lack the fear of change that normally ensures the IT department never does anything with them. And maybe you actually know what you’re doing. For the IT department, preventing you from doing your job as a developer is critically important, because as a developer you are an agent of change. With every key-press you’re writing a new future for some part of their IT ecosystem. Creating? Enabling? Improving? Maybe – those are subjective, and the IT department probably won’t reap any of the benefits anyway. But changing – yes – that is not subjective, and that IS a risk and a cost the IT department must bear. Improvement and enabling are not possible without change, and the IT department is diametrically opposed to change. Since they can’t directly oppose the change you’re carrying out, they need to rein it in and slow it down. Make you work in a virtual machine. Make you work in a virtual machine via VNC. Make you work in a virtual machine hosted in another country via VNC, over a slow link.

Processes are one of the chief tools the IT department has to rein in your positive change: complex change-control processes; opaque ‘network tests’ for anything that is deployed; refusal to roll out supporting componentry; tiny windows for change to occur in; N months of ‘security testing’ before anything is rolled out. It’s fine for the IT department to cause rolling outages during the working day because someone there can’t be bothered staying back late to deploy a change – people’s productivity doesn’t matter to the IT department. But not you: your work can’t be deployed for another 6 weeks because there is a change embargo in place until the end of the holiday season.

The IT department is driven by fear. Fear of change. Fear of additional effort. Fear of being taken out of their comfort zone. Fear of being wrong. Fear that they’ll be found out as impostors. Fear that if they let you look at that proxy configuration, or that network trace, you’ll see why that network folder takes 30 seconds to open for no good reason, and they’ll lose their mystique, their authority. And without that, how can they retain control?

Of course their fears are somewhat misplaced. Where else are you going to go? It’s not like there are two IT departments vying for your business. It’s their network, and their hardware. Plugging in your own stuff can be grounds for dismissal, as can attempting to circumvent their controls. When two organisations merge, the dominant IT department will fight aggressively to bring things back into equilibrium. There can be only one. All those change-control processes they enforced on you get thrown under a bus while the IT department’s immune response kicks in. And once again, if your productivity is impacted by this, the IT department just doesn’t care.

»

I use anti-virus….but probably not for the reasons you think

Like many people with a technical background I am deeply sceptical about the effectiveness of anti-virus software. Instead I’ve preferred to rely on an understanding of the risks involved with opening emails purporting to display dancing bunnies and the like, and on security principles like least privilege, regular patching and so on. In light of the problems McAfee users around the globe had a month or so ago, who wouldn’t question the costs and benefits of anti-virus? »

The sad (but inevitable) state of .NET Obfuscation

Since the release of the .NET framework there has been a quiet arms race between those who want to protect their intellectual property and those who want to avail themselves of the intellectual property of others. And from where I’m standing, the reverse-engineers are winning, convincingly.

The CLR is a virtual machine, and for JIT compilation, linking and introspection to work, a considerable amount of type information must remain in an assembly when it is compiled. Because of that extra metadata (compared to a native C/C++/Delphi DLL), if you’re shipping an un-obfuscated assembly to your customers you may as well be giving them the source code. Sure, they’ll miss out on the “why” in your comments, but other than that they get pretty much everything. Tools like Reflector make it just that easy to reverse-engineer un-obfuscated .NET code. This leads most people developing commercial .NET products that are shipped to end-users to investigate obfuscation as a means of protecting their IP. Often this is also done to prevent easy circumvention of any licensing checks and restrictions that developers may have put into their products.
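The “metadata stays behind” problem is easy to demonstrate in any VM-based runtime. Here is a rough analogy in Python rather than .NET (the function, its argument names and the “secret” string are all invented for the illustration): a compiled function still carries its name, local names and string constants, which is exactly the raw material a decompiler feeds on.

```python
import dis

# A toy licence check -- everything here is invented for this sketch.
def check_licence(key: str) -> bool:
    secret = "XYZZY-1234"
    return key == secret

# The compiled code object still carries the function's name, its
# local variable names, and every string constant it uses.
print(check_licence.__code__.co_name)        # check_licence
print(check_licence.__code__.co_varnames)    # ('key', 'secret')
print("XYZZY-1234" in check_licence.__code__.co_consts)  # True

# dis reconstructs readable bytecode from the code object, names and all.
dis.dis(check_licence)
```

The CLR retains even more than this (full type signatures, member names, attributes), which is why Reflector’s output is so close to the original source.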

Obfuscation can use a number of techniques, such as stripping meta-data, encrypting string resources, scrambling control-of-flow, adding rogue IL instructions that don’t break the program during execution but crash ildasm or reflector, aggressive method overloading (effectively another way to throw out information by removing method names), “packing” the application to hide the IL, and safeguards to prevent debuggers “attaching” as well as probably dozens more that I’m not aware of. And the reverse engineering crowd seem to have counter-measures for every one of these techniques.
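To make one of these techniques concrete, here is a minimal sketch of string encryption, in Python rather than IL (the XOR mask, key and sample string are mine, and real obfuscators use stronger schemes): the literal is replaced by a masked blob plus a tiny run-time decoder. It also shows why de-obfuscators win – once the decoder is located, recovery is mechanical.

```python
import base64

def xor_mask(data: bytes, key: int) -> bytes:
    # XOR is symmetric: the same call masks and unmasks.
    return bytes(b ^ key for b in data)

# At "obfuscation time" the literal is replaced by a masked blob...
MASKED = base64.b64encode(xor_mask(b"connection-string", 0x5A)).decode()

# ...and a tiny decoder reconstructs it on demand at run time.
def secret() -> str:
    return xor_mask(base64.b64decode(MASKED), 0x5A).decode()

# The plaintext no longer appears among the program's constants,
# but anyone who finds the decoder recovers it instantly --
# which is exactly the step that string de-obfuscators automate.
print(secret())  # connection-string
```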

Sometimes this takes the form of de-obfuscation tools specifically developed to undo the work of a particular obfuscator (smartassasin for SmartAssembly, “the Xenocode solution” for Xenocode, DotFuckScator for Dotfuscator, etc.). Usually this is a partial thing – after all, the obfuscation process usually removes information that can’t be recovered – but the de-obfuscator will typically decrypt encrypted strings, allow the assembly to be viewed in Reflector, and unscramble the control-of-flow obfuscation. No, I haven’t tried all of these tools, and some of them are now a little out of date, but I’m fairly sure similar ones exist for the newer obfuscation products, and it really doesn’t matter, because the general-purpose de-obfuscation tools are even scarier.

And if the thought of obfuscator-product-specific tools isn’t worrying enough, or you were thinking about rolling up your sleeves and writing your own obfuscator, have a look at this – a general-purpose re-assembler with de-obfuscation tools built in, demonstrated by Daniel Pistelli, author of the “NativeBlocks” application (currently unreleased, but you can be sure serious reverse-engineers have tools like this, or more powerful). By running a few fairly short scripts through his tool he is able to remove a lot of the control-of-flow obfuscation in Reflector (before Lutz Roeder handed it over to Red Gate). This is in addition to publicly available tools like Rebel.NET (a re-assembler for .NET, also by Daniel) and Simple Assembly Explorer (a class browser and re-assembler, one of the few reverse-engineering tools I found that came with source… I guess if you know what you’re doing with RCE you don’t need source).

I guess it’s not that surprising that reversing .NET applications is easy for those who know how. Even native applications, which don’t have the same constraints as .NET applications and can quite happily discard all of their internal structure during the compilation process, are frequently reverse-engineered using tools like OllyDbg, IDA and Hex-Rays. Ultimately, if it runs on a computer outside your control it can be broken. The ethics of all this reverse-engineering seem fairly dubious to me. Aside from malware analysis I can’t think of many legitimate uses for this stuff, but that doesn’t stop me from being impressed by the ingenuity.

And before you ask, yes - knowing most of this already I wrote my own obfuscation scheme, only later to be gutted when another “generic” tool that I haven’t linked to here was easily able to circumvent it.

Comments

David Connors
The key sentence in your article is “Ultimately if it runs on a computer outside your control it can be broken.” It is not specific to .NET, and it encompasses a lot of time and money wasted on things like Office DRM, HDCP, WMA DRM, and the numerous runtimes out there that ‘secure’ binaries from inspection at runtime.

»

Stephen Conroy Thinks You’re Stupid

That is the only logical conclusion I can draw from his recent comments. Either that, or he is stupid. Otherwise why would he make ridiculous claims about Google’s recent collection of public WiFi data during its Street View data collection, saying that “It is possible that this has been the largest privacy breach in history across Western democracies”? Sure… it’s possible. It’s also possible that the senator’s remarks are nothing more than a cheap use of parliamentary privilege to get back at Google for speaking out about his ridiculous internet filtering plans.
Compared to the petabytes of search and AdSense traffic, web crawl data, email and newsgroup postings that Google has been gathering for about a decade, this data set must be truly immense to be of such privacy concern. His claim that the collection of data could not have been inadvertent also shows a basic lack of understanding of how software works, and is built. Remember, this (Google) is a company obsessed with data – A/B testing to determine which shade of blue in links converts better. So I’m not surprised that “get everything” was the default when someone there wrote some WiFi code for a different project, which was then re-purposed for Street View. In spite of Eric Schmidt’s comments on privacy, Google still has a pretty good track record – remember Google vs. the DoJ? The redaction of faces and car numberplates in Street View?
Also, the notion that Google needs your permission to photograph your house is quite amusing. If I print the salacious details of my personal life on a billboard outside my house, has Google violated my privacy because it knows how to read?

Conroy sez - I've lost the keys to my internet, can you put the word out on the twitter? kthnx

»

Word and WordPad

I was looking at Word 2010 and WordPad (in Windows 7) side by side today. There were a few things that occurred to me.

»

Why don’t you try it and find out?

I often hear questions about .NET development and the Windows ecosystem that deal with very specific situations. “What order do these events fire in?” “Will doing this leak memory?” “Will I see those details in a SQL trace?” “Will that be lazy-loaded?” When you’re a developer you often live and die by these kinds of details, and humans are notoriously fallible – so why bother asking the question at all? Why not write a simple program and find out?

I often see similar problems in the enterprise – questions that could be answered with a simple prototype, a cursory inspection of the data or the event logs, or a 5-minute discussion with the owner of an external system are left to fester and take on a life of their own. Meetings are called because we’ve got a problem calling system X’s web services. Really – what’s the problem? Well, we don’t know – we haven’t tried calling their web services. Sheesh.

Obviously it’s a judgement call on the part of the developers as to when to answer a question with a simple prototype and when to put up your hand and ask (and of course not all questions can be answered by working code). If the time it takes to ask the question properly (providing all the context, environmental details, sample data etc. to those you’re asking) is greater than the time it would take to answer it authoritatively yourself, then DIY is a no-brainer. Similarly, if you’re making an important decision that will be costly to reverse later, you’d be foolish to take someone else’s word for it anyway, so you might as well answer the question with working code. If I’m going to be blocked until I receive a response, I usually ask the question and then begin trying to answer it myself with working code (since I’m blocked anyway). Typically I find that if you can’t answer a question fairly quickly with working code, either you’re not clear about what you’re asking, or your question is too hypothetical (“will this work on Mono on Linux?” – if you don’t have a Linux VM lying around to test this hypothesis, either there is a huge gap in your environment test matrix, or you’re asking a question that is purely academic).
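As a trivial illustration of the “just run it” approach (the question and code here are my own, not from any of the scenarios above), settling a “what order does this happen in?”-style question takes about thirty seconds:

```python
# Question: when a dict key is deleted and re-inserted, does it keep
# its old position or move to the end? Don't speculate -- run it.
d = {"a": 1, "b": 2, "c": 3}
del d["b"]
d["b"] = 4
print(list(d))  # ['a', 'c', 'b'] -- a re-inserted key goes to the end
```

(Python 3.7+ guarantees dicts preserve insertion order, so this result is stable.) The point isn’t the answer; it’s that the experiment was cheaper than composing the email asking about it.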

A single data point doesn’t prove a hypothesis – all swans were believed to be white once, but all you have to do to disprove this theory is to find one black swan. If you’ve answered a question with working code, but are still doubtful go ahead and ask the question, with the evidence you’ve amassed from your code experiments included. People will see that you’re not being lazy and have taken the time to do basic investigation yourself, and hopefully people will only respond if they have something specific to add.

A computer capable of answering the ultimate question

See also: how to ask a question http://catb.org/~esr/faqs/smart-questions.html

Comments

Lb
Very true.

»

Why doesn't feature X exist in Windows?

From time to time I see questions patterned like “why doesn’t Microsoft include X in Windows?” where X is some seemingly basic feature like PDF viewing, anti-virus etc. Surely Microsoft with its armies of programmers and war-chest of capital could buy or build a decent “X”? Well, here's the rub. Remember when Microsoft added a web-browser to Windows? The DoJ slapped an anti-trust suit on them, and the EU turned them into their personal ATM. »

'safety' in numbers for start-ups?

I was listening to an interview on Mixergy with Ryan Allis, an enterprising young man who built e-mail marketing company iContact into a company with >$3M in monthly sales, all by the tender age of 25. While this is a great accomplishment, I couldn’t help but feel a little worried at some of the things Ryan was saying: “We very happily, intentionally burn capital every month. And we do that purposely because of what is called, ‘unit economics’.” »