Books A Million is a scam

Tried to find a copy of The DevOps Handbook by Gene Kim in a non-Kindle format since I want to read on a couple of devices that don't have good Kindle support. I found a link to booksamillion.com on the publisher's website. BAM advertises the book's availability in ePUB format right in the product description. Unbeknownst to me, they don't actually have it available in ePUB format, which BAM apparently believes is just a term of art for any electronically readable book. The actual format is the proprietary, DRM'd Adobe Digital Editions ACSM format, which you don't get to find out until after you've paid and can't get your money back. I tried the Adobe DE app because I just wanted to start reading instead of screwing around with customer service, but after logging into it with my BAM credentials it got stuck forever at the download screen and crashed over and over again.

Next, I'm trying to get my money back since I can't even read it in their preferred (required?) proprietary reader. Customer service has to escalate to the book vendor, "Overdrive". WTF? Who is that? How is BAM not the vendor? Apparently I can't get my money back…

Jesse
How do I get my money back?
The site says you sell books in epub format, but I can only download in some DRM-encumbered ACSM format
this is clearly deceptive
I tried adobe digital editions on my PC…it lets me authorize the machine, but it refuses to finish downloading or opening the book
it’s just been stuck on “fulfilling the devops handbook” forever and won’t load
Whitney joined the chat
Whitney
Hi, Jesse I will be happy to help you with that.  =)
May I have the order number please?
Jesse
<ID#>
Whitney
I do apologize, Jesse. I will have to escalate this to the vendor
Jesse
? who is the vendor?
I want the book in the advertised format, epub
Whitney
Overdrive is the vendor of this eBook
Jesse
if you don’t actually sell that, then please give me my money back
Whitney
In the meantime, please try following these instructions? To access your eBook in Windows 10, you will first need to download Adobe Digital Editions for Windows here: http://www.adobe.com/solutions/ebook/digital-editions/download.html.
Next, you will open Adobe Digital Editions and click “Help”. Select “Authorize Computer”. You will then select “Book-A-Million” from the eBook Vendor drop-down. The Vendor Login ID and Vendor password will be the same as the username and password you used to login to booksamillion.com when you made your purchase. Click “Authorize”. Click “Ok”. Note: Do not select “I want to authorize my computer without an ID.
To download your eBook, go to booksamillion.com and click “My Account” on the top right to login to your account. Go to Download Library and click “Download”. Select “Open with” and choose Adobe Digital Editions from the dropdown. Click “Ok”.
Please let us know if we can be of further assistance or you can visit our online Help Desk at http://www.booksamillion.com/help/index.html?id=<id>.
Jesse
already did that
so you’re saying I can’t get my money back?
Whitney
I’m saying I will have to escalate to the eBook vendor. They will make that decision
Fuck these guys. I guess Amazon Kindle’s walled garden is the best we can hope for to read a book if nobody else can get their shit together. Ugh.

Surface Book Update 4-19-16: Still Sucks

There was a HUGE firmware and driver update on April 19, 2016 that included a dock integration update, which appears to have fixed at least a few of my multi-monitor issues. I'm not sure if they're fully resolved yet, but after an initial display dance (see last post), the dock at work seems to remember the last configuration I was using whenever I connect, which is a first! I have a slightly different setup at home with a different dock, but we're painting this weekend so I can't get to my desk to see if all of my multi-monitor problems are resolved.

My last post was all about how the dock's display issues are awful, so why does my Surface Book still suck? Well, since the update, connected sleep has returned to an absolutely unreliable state, and I've had to disable it in the registry. Again.

For the first few days after the update, when I started my SB at work after an overnight hibernate, it would boot from scratch as though it had crashed during the night. I'd had the machine configured to sleep after 20 minutes and hibernate after 2 hours on battery, since a previous firmware update had made sleep reliable enough to use. I thought maybe MS had broken hibernate this time, so I disabled it. No, it's connected sleep. Again. ARGH!!

To rule out hibernate, I configured it to just sleep (no hibernate) while on battery, either after 20 minutes or when I close the lid. I have wireless disabled during sleep in the power settings app. Since the update, if it sleeps too long it just shuts off, even if hours and hours of battery remain. I'll open the lid expecting it to wake up and the power will be completely off. When I press power it boots from scratch and I've lost all my work. Awesome!

After each of these shutoff events, when I go into the Event Viewer I see an Event ID 41 / Kernel-Power entry during the boot sequence noting that the machine didn't shut down cleanly. Before that there is almost always a series of 506 and 507 events from hours earlier, also from Kernel-Power, noting that it's entering and exiting connected standby repeatedly due to "User Display Burst". As far as I can tell through correlation, User Display Burst means an important notification like a reminder or Cortana message. So it's waking up for those on its own and then trying to go back to sleep, sometimes performing that cycle a few times in quick succession. After the 506/507 sequences, there are no other events listed until the regular startup events.
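If you want to dig for the same pattern on your own machine without clicking around Event Viewer, a PowerShell query along these lines should surface the relevant Kernel-Power entries. This is just a convenience sketch, not anything Microsoft suggests for this particular issue:

    # Pull recent Kernel-Power events: 41 = machine didn't shut down cleanly,
    # 506/507 = entering/exiting connected standby.
    Get-WinEvent -FilterHashtable @{
        LogName      = 'System'
        ProviderName = 'Microsoft-Windows-Kernel-Power'
        Id           = 41, 506, 507
    } -MaxEvents 50 |
        Select-Object TimeCreated, Id, Message |
        Format-Table -AutoSize -Wrap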

So off I go into the registry again to disable connected sleep and replace it with hibernation. Microsoft, why can't you make a computer that can deal with power management? You get to write the BIOS, all the firmware, and the operating system!
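For anyone wondering what "disable it in the registry" means in practice: the tweak that gets passed around for this is the CsEnabled value under the Power key. I can't promise it will keep working across future firmware or Windows updates, so treat it as an at-your-own-risk sketch. Run it from an elevated command prompt and reboot; setting the value back to 1 re-enables connected standby:

    REM Turn off connected standby so traditional sleep/hibernate get used instead.
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Power" /v CsEnabled /t REG_DWORD /d 0 /f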

I used to think the Linux community was just whining about how tough sleep (S3 in particular) is to deal with in the PC BIOS. Now that I've watched Microsoft flail and struggle with this for literally years on the Surface line, I get it. MS is having to sleep in the bed it has spent decades helping the PC industry make, so they get to learn all of the hard lessons HP, Dell, Lenovo, etc. have had to master on their own through huge engineering efforts: getting their machines to sleep reliably despite the nasty clutter of standards, BIOS support, and OS hooks available. You were supposed to make this look easy, Microsoft!!

At least this problem is fixable, but it’s pretty messed up that so many people who have these machines are dealing with this and maybe have no idea how to work around it themselves.


Microsoft Surface Book Absolutely Sucks

I'm posting this from a Surface Book. I really want to love this machine, but it doesn't like me. I can say without exaggeration that it has crashed hard more times since I bought it in early Nov. '15 than all of the previous machines I've owned since 2000 combined, and one of those was a hacked-apart HP TC1100 2-in-1 tablet that I modified for passive cooling and to use a CompactFlash-based IDE drive instead of a mechanical one. It crashed a lot. That was nothing compared to the Surface Book.

My state-of-the-art Surface Book came right out of the box in terrible shape. Within seconds of starting up it froze hard. In the first hour it froze or blue-screened over and over at 5- or 10-minute intervals. I considered driving it back to the Microsoft Store at the Mall of America where I'd obtained it and just waiting for v2.0, because clearly Microsoft didn't get it. Luckily it stayed up long enough to survive a round of Windows updates and a firmware patch that appeared to stabilize it.

Then I detached the tablet. It would only detach and reattach without crashing about 50% of the time until the next firmware update came out, so I was pretty careful with it until about late Dec. '15.

Sleep was also unusable for the first four months I had it. The machine would only wake back up from sleep about 20% of the time. The rest of the time it would freeze or reboot. Needless to say, I disabled sleep immediately and just used hibernate, which has only failed me on two occasions (within the last week though, unfortunately, so it's trending back in the bad direction). Every time Microsoft issued a firmware update I would edit the registry to turn connected sleep back on and try it out, and every time it would shit the bed, so I left it disabled until the massive Intel/MSFT driver and firmware update in Feb. '16 where they promised sleep was fixed. To their credit it is actually fixed now, but yeah, it took at least 4 months, during which time the internet was abuzz with what a steaming pile of crap these machines are.

After 5 months of firmware updates the machine is finally working great, except for one thing: the spendy docking station is absolute crap for multiple monitors. I play a constant game of switching back and forth between PC Screen Only and Extend on the Win-P screen to try to tease it into displaying on more than one monitor. I've read every top search entry that describes this problem. Soooo many suggestions. I've tried active cables, and I'm using supported, paired HP monitors both at home and at work, but still it refuses to connect nearly 100% of the time:

  • If the machine wakes from sleep connected to the dock, it often stops displaying on external monitors until you play the Win-P switching game.
  • If it's been asleep for a long time and I connect to the dock, it often won't acknowledge external monitors at all; when I hit Win-P it doesn't even flash like it's trying to display, as if Windows forgot about monitor switching altogether.
    • Unplug either monitor from the dock and plug it directly into the machine, and BAM! It works instantly.
    • Plug it back into the dock after having used a monitor on the internal DP port, and suddenly Windows and the dock remember what multiple monitors are again.
    • Hibernate and wake the machine when it's in this state while connected to the dock, and it wakes up displaying correctly on all three screens. WTF?

That last one has been my go-to move. If it doesn't display immediately when I connect to the dock (i.e. 99+% of the time), I hibernate it, wait for it to finish, and wake it back up. So much easier than trying to Win-P back and forth until it works or playing with cables, but a totally inconvenient and unintuitive way to work. I'm considering getting rid of the dock and using an old USB3 multi-monitor dock I have. Tragicomically, that old POS can power external monitors 100% of the time when attached to a USB3 port on the Surface Dock. WHAT IS GOING ON HERE?

It would be inconvenient to use two cords (the SB power cord and a USB cable), but if I could get my $200 back, maybe I would feel better about having wasted so much time playing with the stupid Surface Dock.


Code Review vs. Paired Programming

This is a great post about substituting code reviews for paired programming, from the perspective of a long-time paired programmer. I did paired programming for about two years at various points in my career. I found it very grueling and much prefer code reviews, but I still pair up to work through hairy problems and do new-concept training for hours or days at a time now and then.

An interesting notion from the article is that they found pull requests from dev to main branches were an ideal place to do the code review because GitHub could archive the conversation. IOW, they are essentially using it as a code review tool. Interestingly, all of the reasons they like it and all of the reasons they hate it exist because it is NOT, in fact, a code review tool. E.g., the article noted that sometimes pull requests are enormous and seem to take forever to review. I would argue that using a dedicated code review tool would allow them to use pull requests for their intended purpose and have the conversation elsewhere (and get control over the increment size). They also talked about how code review (however you do it) was a great replacement for paired programming, even to the point of calling it asynchronous pairing.

This is a key insight! If you only use code review as a control flow structure to keep things out of QA until you’re ready, and not as a quality control and knowledge sharing loop, then you’re doing it wrong. They were clearly looking to replace the value they found in the knowledge sharing and quality control of paired programming, and they found it in the form of thorough code reviews.


Dear Visual Studio Code Analysis / FxCopCmd.exe:

WHY DO YOU HATE ME WHEN I SHOW YOU NOTHING BUT LOVE!?

Sincerely,

Jesse

Seriously.

Visual Studio Code Analysis (aka prettified output from fxcopcmd.exe in a Visual Studio pane) hasn't been worked on in, like, forever. And it shows. Why is it that when you configure Code Analysis just the way you want it in your projects and then do a build, Code Analysis decides it only wants to run for the projects with changes? Configurable? Not that I can tell. No problem, I'll use the Build menu to run Code Analysis for the solution. HUGE FAIL! It looks like it runs it for all projects whether they're properly configured or not. WTF?? If I do a clean and rebuild of the solution, CA starts pumping out messages for the whole solution. There are 10 ways to kick off CA for the solution and I think this is the only one that works consistently.

FxCopCmd.exe has so many damned problems it's not even funny. Want to run it on a build server? Good effing luck. Give up and install Visual Studio. Want it to honor suppressions for release builds? Don't forget to pass /p:DefineConstants="CODE_ANALYSIS" to msbuild when in release mode or you're screwed. Isn't that documented? Only if Google's index somehow counts as documentation. Why don't you have to pass that *all* the time? Why isn't it required for debug? WHO KNOWS!? It seems like some parts of the code analysis instrumentation are integrated with the compiler, or at least with some magic step in the build process, because this thing is a weird black box and you just have to fart around with it until it works.
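For reference, here's roughly what that looks like on a build server; the solution name is a placeholder. The reason the symbol matters at all is that [SuppressMessage] attributes are compiled conditionally on CODE_ANALYSIS, so a build that doesn't define it silently drops your suppressions before FxCop ever sees them:

    REM Release build with code analysis, keeping [SuppressMessage] attributes alive.
    REM "MySolution.sln" is a placeholder for your own solution.
    msbuild MySolution.sln /p:Configuration=Release /p:RunCodeAnalysis=true /p:DefineConstants="CODE_ANALYSIS"

One caveat: passing DefineConstants on the command line can stomp the constants defined in the project file, so check whether TRACE or other symbols matter to your release build before blindly copying this.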

Don't even get me started on the stupid Code Analysis spelling dictionary. THERE ARE MAGICAL WORDS WHICH YOU CANNOT CORRECT NO MATTER HOW HARD YOU TRY. Just disable CA1704 or you will go insane. A developer in my group recently needed "checkbox" recognized as a valid compound. It's the god damned EXAMPLE in the article on how to customize the CA dictionary and it doesn't work. He just shut off CA1704 instead; most of the garbage that rule produces isn't even worth suppressing.
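For anyone who wants to try their luck anyway: the customization in question is a CustomDictionary.xml file added to the project with its Build Action set to CodeAnalysisDictionary. Something along these lines is what the docs describe (no promises it actually takes, per the above):

    <?xml version="1.0" encoding="utf-8"?>
    <Dictionary>
      <Words>
        <!-- Treat "checkbox" as a legitimate word/compound instead of flagging it. -->
        <Recognized>
          <Word>checkbox</Word>
        </Recognized>
        <Compound>
          <Term CompoundAlternate="CheckBox">checkbox</Term>
        </Compound>
      </Words>
    </Dictionary>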

What about when people configure fxcopcmd.exe to run in release mode as part of their build? Well, when the code analysis MSBuild tasks are run by Visual Studio, they magically always work great and never have any problems or errors. When you run them with msbuild like you would on a build server, they can sometimes create a command line with so many reference arguments that it's too long for Windows to execute. WTF?? Now you have to pass /p:RunCodeAnalysis=false to msbuild and manually run fxcopcmd.exe as a separate, post-build step with the DLLs you want to analyze. F#$k you, fxcopcmd.exe. You could be so awesome and so helpful, but no, you force me to experience Stockholm syndrome rather than just being cool.
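If you end up in that boat, the workaround sketch looks something like the following. The solution name, assembly name, and output path are placeholders, and FxCopCmd.exe lives wherever your Visual Studio install put it (under Team Tools\Static Analysis Tools\FxCop for recent versions):

    REM 1) Build without the integrated code analysis pass.
    msbuild MySolution.sln /p:Configuration=Release /p:RunCodeAnalysis=false

    REM 2) Run FxCopCmd directly against just the assemblies you care about.
    "C:\Program Files (x86)\Microsoft Visual Studio 14.0\Team Tools\Static Analysis Tools\FxCop\FxCopCmd.exe" /file:bin\Release\MyApp.dll /out:CodeAnalysisReport.xml /summary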


Lync and “This message is too long” errors

Have you ever sent a Lync message and received this error when you KNOW damn well it’s not too long?

[Screenshot: Lync's "This message is too long" error dialog]

It turns out Lync has two maximum message length policies: one for the first message in a conversation, and one for subsequent messages. Why? Nobody knows. Probably to slowly drive us mad. The "first message limit" is ~800 characters, while subsequent messages can be ~8,000. I know, right? Many systems (like SharePoint) can easily produce URLs longer than 800 characters for basic site features (like SharePoint document history links), so I always have to start the conversation with a short pleasantry like "Yo" and then paste the giant link as a separate message, or I get the angry "This message is too long. Please shorten your message and try sending again." error. YOU'RE WELCOME!


If I can test my app entirely with integration tests, why do any unit testing?

Someone at work asked why they should write any unit tests if the whole app can be integration tested. Basically, the question was:

With complete integration testing, do unit tests add enough value to justify the additional time and added complexity? In this case adding an IoC would not be trivial given the limited support in the Katana middleware I’m using. And mocking the data layer is always such a drag, imo.

I replied as follows:

Great question! To buy some time I will start by quibbling with a couple of your premises:

1. Unit tests add complexity
I apologize if this comes off as smug or sanctimonious, but I'll write it anyway with the promise that I'm trying to be helpful and not jerky: if you're following SOLID, it shouldn't add any complexity to add unit tests later…you should just be adding a bunch of unit test fixtures to your solution. I almost exclusively practice "test-after development", so I can say from experience this is possible. I promise anyone reading this that even if you skip the tests, your software quality and maintainability will be higher if you write as SOLID-ly as possible at all times (even for JavaScript!).

2. Katana has limited support for IoC
If you're hosting WebAPI on Katana, it should be trivial to implement pure DI (hand-coded object graphs) or insert a DI container. Check this out; there's also a rough sketch of the pure-DI approach right after these three points.

3. Mocking the data layer is a drag
If this is routinely painful, are you maybe unit testing parts that might lend themselves more naturally to integration tests? We add automated integration tests all the time for this purpose in [my department].
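Here's the sketch promised under point 2. It's my own illustration of the pure-DI idea, not anything from the original thread: with Web API self-hosted on Katana (assuming the Microsoft.AspNet.WebApi.Owin package), you can replace the default IHttpControllerActivator with a hand-coded composition root. The controller and repository types below (OrdersController, IOrderRepository, SqlOrderRepository) are made-up placeholders:

    using System;
    using System.Net.Http;
    using System.Web.Http;
    using System.Web.Http.Controllers;
    using System.Web.Http.Dispatcher;
    using Owin;

    // Minimal hypothetical types so the sketch hangs together.
    public interface IOrderRepository { }
    public class SqlOrderRepository : IOrderRepository { }
    public class OrdersController : ApiController
    {
        private readonly IOrderRepository _orders;
        public OrdersController(IOrderRepository orders) { _orders = orders; }
    }

    // Hand-coded composition root: builds each controller's object graph
    // explicitly, no DI container required.
    public class CompositionRoot : IHttpControllerActivator
    {
        public IHttpController Create(
            HttpRequestMessage request,
            HttpControllerDescriptor controllerDescriptor,
            Type controllerType)
        {
            if (controllerType == typeof(OrdersController))
            {
                return new OrdersController(new SqlOrderRepository());
            }

            throw new ArgumentException("Unknown controller type: " + controllerType.Name);
        }
    }

    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var config = new HttpConfiguration();

            // Swap the default activator for the composition root.
            config.Services.Replace(typeof(IHttpControllerActivator), new CompositionRoot());

            config.MapHttpAttributeRoutes();
            app.UseWebApi(config);
        }
    }

Swapping in a container later (or faking IOrderRepository in a unit test) is then just a matter of changing what the composition root news up.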

That didn't buy me much time, so I'm going to say the answer to your overall question is "it depends". I don't see any problems with what you're doing *in its current state*, but I do have some longer-term concerns about going all-in on either unit or integration testing alone.

In either case (all unit tests or all integration tests), I strongly believe you should use appropriate code coverage tools and analyze your gaps to determine the best strategy to proceed.

In my experience, if you have only integration tests you will eventually find gaps on coverage reports, because chunks of code either 1) are completely inaccessible via the primary integration point (e.g. the UI or top-level API), or 2) require such complicated integration state transitions to reach as to be impractical to write or run (i.e. costing large time multipliers just to reach a few more lines of code). That doesn't mean you can't or shouldn't test that code, just that you might need more complicated tests than you originally thought, or that you might see an opportunity to add unit tests to a part of the code base not well served by your integration suite.

As your application grows, it will also typically take longer and longer to execute a comprehensive integration test suite, because you have to stand up infrastructure and plumbing to get the tests to execute, and you end up adding more and more complicated state transitions and interactions as your features expand. This sometimes causes people to avoid running integration tests as frequently as unit tests. That can be helped somewhat through technical means (using CI, gated check-ins, etc.) or through process means (code reviews, manual coverage verification, etc.), but be sure to watch for less frequent use of the integration suite.

The opposite is mostly true for an all-unit-test suite: your test runs should be much shorter and easier for developers to run very frequently, but you're going to see different gaps on your coverage reports, mostly at infrastructure interconnects: between the UI and business logic, between business logic and persistence, and between service tiers.

There is definitely a sweet spot for both, but my opinion is that software quality favors more unit tests as applications grow, particularly if you don't have a dedicated QA resource with a vested interest in keeping up integration tests for you. This is not uncontroversial, and experienced people disagree about it all the time 😉 It has been my experience that large integration suites driven from the front end are fairly brittle. I've done front-end-driven integration tests with dedicated tools, with assistive frameworks, and through hand-coding, for web, WinForms, and WPF apps. The test churn can be overwhelming if you need to make a major front-end change at any point, and at times this has caused me to either abandon integration tests altogether or trim them back to the most basic of automated smoke tests and add more unit tests.

Case in point: the gig I was at before my current one had me coding a multiplatform MS Word extension against a REST API written in Ruby. The API was being designed at the same time, so there was a lot of churn in general, but the Jenkins job for the Ruby app would frequently have 90% of the tests break at once, and it sometimes took a few hours just to get them all working again. It also took the integration suite more than 10 minutes to run, which is why the breaks were only seen on CI; it took way too long for the developers to exercise the suite frequently on their own machines, so they just waited until Jenkins told them they broke something and then they'd react. Phooey.

The primary reason unit tests don't result in that much churn is that they are oriented around individual implementations, whereas an integration surface like a front-end UI or API tends to be a relatively monolithic structure that, even though it changes uniformly, fundamentally alters how the application is driven when it's modified. Refactoring a couple of interfaces will result in a limited number of new coverage gaps or unit test breaks, but shouldn't break anything unrelated to those interfaces. A small front-end change can end up breaking every single integration test.

So because of performance and brittleness concerns, I tend to favor unit tests for overall code coverage, and integration tests only for specific application hotspots, like exercising the data access layer or covering areas with a history of regressions that are hard or impossible to unit test.
