• 3 Posts
  • 31 Comments
Joined 8 months ago
Cake day: February 26th, 2024



  • Who’s sped up by your automated tests are your team members and you-in-three-months.

    Definitely true. I’m very thankful when a test fails and I know I broke something and need to clean up after myself. Also very nice as insurance against our more “chaotic” developer(s).

    I’ve advocated for tests as a team effort. Problem is just that we don’t really have any technical leadership, just a hands-off EM and hands-off CTO. Best I get from them is “Yes, you should test your code.” …Doesn’t really help when some developers just aren’t interested in testing. I’m warming another developer on my team up to testing, so at least I may get one or two more people on the testing kick for a bit.

    And as for management rating me… I don’t really worry too much. As I mentioned, hands off management. Heck, we didn’t even get performance reviews last year.



  • We used to have a scrum master so we’re already agile! /s

    They want those things, sure, but I think it would take multiple weeks of dedicated work for me to set up tests on our primary system that would cover much of anything. A big investment that might enable faster future development is exactly what I find hard to sell. I’m already seen as the “automated testing guy” on my (separate) project, and it doesn’t really look like I’m that much faster than anyone else.

    What I’ve been meaning to do is start underloading my own sprint items by a day or two and try to set up some test infrastructure in my spare Fridays to show some practical use. But boy is that a hard thing to actually hold myself to.


  • especially if the code has been written in a way that makes it difficult to write a robust test.

    I definitely deserve a lot of blame for designing my primary project in a way that makes it hard to test. So, word to the wise (though it doesn’t take a genius to figure this out): don’t tell two fresh grads and a 1 YoE junior to “break the legacy app into microservices” with minimal oversight. If I did things again, I still think the only sane decision would be to cancel the project as soon as possible. x.x

    I actually was using a mock webserver with the expected request/response, which sounds like what you’re getting at. It still felt fiddly, though, and it doesn’t solve the huge mock data problem, which is more of an architectural failing.
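    For reference, this is roughly the shape it took. A minimal sketch using JUnit 5 and OkHttp’s MockWebServer (the class and endpoint names here are invented, not our real services):

    ```java
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import okhttp3.MediaType;
    import okhttp3.OkHttpClient;
    import okhttp3.Request;
    import okhttp3.RequestBody;
    import okhttp3.Response;
    import okhttp3.mockwebserver.MockResponse;
    import okhttp3.mockwebserver.MockWebServer;
    import okhttp3.mockwebserver.RecordedRequest;
    import org.junit.jupiter.api.Test;

    class PricingClientTest {

        @Test
        void sendsExpectedRequestAndReadsCannedResponse() throws Exception {
            MockWebServer server = new MockWebServer();
            // Canned response standing in for the downstream service.
            server.enqueue(new MockResponse()
                    .addHeader("Content-Type", "application/json")
                    .setBody("{\"price\": 42.0}"));
            server.start();
            try {
                // In a real test this would call our own (hypothetical) client class;
                // plain OkHttp is used here just to keep the sketch self-contained.
                OkHttpClient http = new OkHttpClient();
                Request request = new Request.Builder()
                        .url(server.url("/api/price"))
                        .post(RequestBody.create(MediaType.parse("application/json"),
                                "{\"sku\":\"ABC\"}"))
                        .build();
                try (Response response = http.newCall(request).execute()) {
                    assertEquals(200, response.code());
                    assertEquals("{\"price\": 42.0}", response.body().string());
                }

                // The fiddly part: asserting the outgoing request, which has to be
                // updated whenever the request signature changes.
                RecordedRequest recorded = server.takeRequest();
                assertEquals("/api/price", recorded.getPath());
                assertEquals("{\"sku\":\"ABC\"}", recorded.getBody().readUtf8());
            } finally {
                server.shutdown();
            }
        }
    }
    ```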

    I’ve mostly gotten away from testing huge methods with seemingly arbitrary numbers in favor of testing small methods with slightly less arbitrary numbers, which feels like a pretty big improvement.

    How are you gonna get good at it if you don’t do it! :D

    True. :)



  • The combination is bad.

    I’m not really sure what there is to do about that, then. My own project is already about to hit 3 years on something that was intended to be <1 year total, due to constant scope creep. Nothing bad ever seems to come of the delays though, so I tend to ignore most of the complaints.

    If you see it as an argument

    I don’t really see it as that. “Discussion” is more what I try to do. But you are correct that I don’t think I can argue on their terms.

    are you sure you understand what they value and prioritize

    Probably not exactly, but my point is that the priorities technical leadership says we value (quality, scalability, fast iterations) run counter to what we actually prioritize. I often ask why we prioritize Project X over Project Y and the answer is almost always a variation of:

    • “We can’t let IT be the reason Project X is late.”
    • “The business thinks we’ve been working on Project X a long time (often not true) so we need to show progress.”
    • “Project X was promised for Release Z so it needs to get done over anything else.”

    Which is why I said our priorities are more about appearing busy and important than anything else. (My own project isn’t even wanted by most business users. It was spearheaded by the VP of IT as a huge technical modernization effort despite doing almost nothing to improve or get away from the legacy system it is “replacing”.) So I think the reason I have such trouble getting buy-in is that better testing runs counter to IT’s true priorities, even if it provides business value.

    [Trust] might be eroded down due to the consistent failure to meet estimates.

    Perhaps. But trust is already pretty darn low for that very reason.



  • Perhaps it’s just part of being somewhere where tech is seen as a cost center? Technical leadership loves to talk big about how we need to invest in our software and make it more scalable for future growth. But when push comes to shove, they simply say yes to nearly every business request, tell us to fix things later, and we end up making things less scalable and harder to test.

    It feels terrible and burns me out, but we never seem to seriously suffer for poor quality, so I thought this could be all in my head. I guess I’ve just been gaslit by my EM into thinking this lack of testing is a common occurrence.

    (A programming lemmy may not be a terribly representative sample, but I don’t see anyone here anywhere close to as wild west as my place.)



  • We’ve definitely written lots of tests that felt like a net negative, and I think that’s part of what burned some devs out on testing. When I joined, the few tests we had were “read a huge JSON file, run it through everything, and assert seemingly random numbers match.” Not random, but the logic was so complex that the only sane way to update the tests when code changed was to rerun and copy the new output. (I suppose this is pretty similar to approval testing, which I do find useful for code areas that shouldn’t change frequently.)
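    For the curious, a hand-rolled approval-style test is basically the following sketch. The fixture paths and the generateReport stand-in are made up; the real producer would be the big legacy pipeline:

    ```java
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import org.junit.jupiter.api.Test;

    class ReportApprovalTest {

        // Placeholder for whatever actually produces the output; in reality this
        // would be the complex legacy logic, not a one-liner.
        private static String generateReport(String inputJson) {
            return inputJson.trim();
        }

        @Test
        void reportMatchesApprovedOutput() throws Exception {
            String input = Files.readString(
                    Path.of("src/test/resources/fixtures/big-input.json"), StandardCharsets.UTF_8);
            String actual = generateReport(input);

            // The "approved" file is committed alongside the test. When behaviour is
            // supposed to change, you regenerate it and review the diff instead of
            // hand-editing a pile of expected numbers.
            Path approved = Path.of("src/test/resources/approved/report-output.json");
            assertEquals(Files.readString(approved, StandardCharsets.UTF_8), actual);
        }
    }
    ```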

    Similar issue with integration tests mocking huge network requests. Either you assert the request body matches an expected one and have to update it whenever the signature changes (fairly common), or you ignore the body, which makes for a much less useful test.

    I agree unit tests are hard to mess up, which is why I mostly gravitate to them. And TDD is fun when I actually do it properly.


  • This is a bigger problem than tests.

    You mean things going over estimates or SM/EM complaining about it?

    You’re presenting a solution for a problem that the team either does not see as important or doesn’t think exists at all.

    It’s definitely a known issue, and I think people consider it semi-important. Feels like every other standup has a spiel from the EM about “we need to test things, stop breaking things, etc.”.

    Whenever I argue on their terms though, I quickly “lose”, because business terms seem to be, “agree to everything from the business, look busy, and we will have time for IT concerns (i.e., testing) when we are done with business projects for the year (i.e., never).”

    If I want any meaningful change, I think it will need to be something I work around management on.



  • Nah, red flag is certainly accurate in my case.

    We really don’t have a strong hierarchy of engineering leaders. Devs are all pretty much equal. The EM is extremely hands-off, but also prefers to hire inexperienced developers to “train them up” (which seem like contradictory ideas…). So we have a very free-for-all development process after work is assigned. And of course very few (zero?) devs really want to start doubling estimates for quality that no one seems to care strongly about.

    (The saving grace here, if you can call it that, is that it’s very easy to go around leadership and do whatever you want with the dev process, so long as you can do it yourself. So perhaps what I should do is add stricter code coverage checks on the services primarily worked on by me as a way to protect me from myself, and maybe convince some others to join in.)
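    Something like this is what I have in mind, assuming a Gradle build with the JaCoCo plugin (Maven’s jacoco-maven-plugin has an equivalent check goal); the 80% threshold is just a placeholder:

    ```groovy
    // build.gradle: fail the build when line coverage drops below a threshold.
    plugins {
        id 'java'
        id 'jacoco'
    }

    jacocoTestCoverageVerification {
        violationRules {
            rule {
                limit {
                    counter = 'LINE'
                    value = 'COVEREDRATIO'
                    minimum = 0.80
                }
            }
        }
    }

    // Hook the gate into the normal build so it cannot be quietly skipped.
    check.dependsOn jacocoTestCoverageVerification
    ```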


  • Ehh to be fair, none of the code with coverage is in use by anyone. It’s a constantly delayed project that I kind of doubt will last more than a few months in production if it ever gets there. The primary app has no tests, and the structure probably would require dedicated effort to make testable. Most logic goes through this sort of “god object” that couples huge models very tightly with the database. It’s probably something that can be worked around in a week or so, but I never spend much time on that project.
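    (What “working around it” would probably look like: pull the actual logic out into methods that take plain values, so they can be tested without going through the god object or the database. A made-up sketch of the idea:)

    ```java
    import static org.junit.jupiter.api.Assertions.assertEquals;

    import java.math.BigDecimal;
    import java.util.List;
    import org.junit.jupiter.api.Test;

    class InvoiceMathTest {

        // Made-up example: the calculation takes plain values instead of reaching
        // through the god object into the database, so it can be tested directly.
        static BigDecimal totalWithTax(List<BigDecimal> lineItems, BigDecimal taxRate) {
            BigDecimal subtotal = lineItems.stream().reduce(BigDecimal.ZERO, BigDecimal::add);
            return subtotal.add(subtotal.multiply(taxRate));
        }

        @Test
        void addsTaxToSubtotal() {
            BigDecimal total = totalWithTax(
                    List.of(new BigDecimal("10.00"), new BigDecimal("5.00")),
                    new BigDecimal("0.10"));
            // compareTo ignores BigDecimal scale, so 16.50 matches 16.5000 here.
            assertEquals(0, new BigDecimal("16.50").compareTo(total));
        }
    }
    ```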

    I’m not sure if I want to be that guy though, slowing everyone down when the scrum master and managers are already constantly complaining about everything going over estimates. (Even if poor testing is part of the problem…) I could maybe get a couple devs to buy in on requiring tests on new code, definitely not QA or my EM though. Last time I tried to grandstand over testing, I got “XYZ needs this ready now, I’ll create a story for next sprint to write tests.” … That was 4+ sprints ago, and it’s still sitting there. I just don’t really know how to advocate for this without looking like an annoying asshole, after trying for so long.


  • These are more or less the thoughts I typically hear online, and it all makes sense. What I tend to notice interviewing people from big(ger) companies than mine (mostly banks) is that testing for them sounds like it’s mostly about hitting some minimum coverage number in CI/CD. Probably still has big benefits, but it doesn’t seem super thoughtful? Or is testing just so important that even testing on autopilot has decent value?

    I get that same feeling with frontend testing. Unit testing makes sense to me. Integration testing makes sense but I find it hard to do in the time I have. But frontend testing is very daunting. Now I only test the data models we keep in the frontend, if I test anything frontend at all.


  • Was there any event that prompted more investment in testing? I feel like something catastrophic would need to happen before anyone here would consider serious testing investment. In the past (before I joined) there were apparently people who tried to get Selenium suites going, but nothing ever stuck.

    I think nobody sees value in improving something that has been more or less “good enough” for so long. In our legacy software, most major development is copy, paste, and change things, which I guess reduces the chance of regressions (at the cost of making big changes much, much slower). I think we have close to a hundred 4k-line Java files copied from the same original, plus another 20-30 scripts and configs for each…

    We are doing a “microservices rewrite” that interfaces with the legacy app (which feels like a death march project by now), and I think it inherited much of the testing difficulties of the old system, in part due to my inexperience when we started. Less code duplication, but now lots of enormous JSONs being thrown all over the network.

    I agree that manual testing is not enough, but I can’t seem to get much agreement. I think I do get value when I write unit tests, but I feel like I can’t point to concrete value because there’s no obvious metric I’m improving. I like that when I test code, I know that nobody will revert or break that area (unless they remove the tests, I suppose), but our coverage is low enough that I don’t trust the tests to mean the whole system actually works.