A Route from Żmigród to Wrocław

    What I liked about this route

    I loved the wilderness, the effortless way of hopping onto the route, and its extremely easy-to-navigate paths, even without maps or apps. The first part is immersed in nature, or at least it is separated from the (calm) asphalt roads.

    Image from Żmigród surroundings 1

    Image from Żmigród surroundings 2

    What I didn’t like

    However, what I didn’t like about the route was the finish: the last 12–15 kilometers. I know them well. The separated bike lane in Wrocław runs alongside a very busy suburban road. It was crowded with cars, noisy, and rather polluted.

    How to get to Żmigród

    I got to Żmigród from Wrocław Main railway station. The ride with an Intercity train took about 25 minutes. Upon arrival, you are greeted with really nice bike lanes, mixed with some local roads. It’s effortless to get from the railway station in Żmigród onto the trail.

    Bike lanes near Żmigród

    Towards Milicz

    Then I took the red trail starting near the Palace. I was totally surprised by the beauty of the surroundings. The landscape was perfect. You are surrounded by a riverbank, fields, and a lot of birds flying around you, trying to figure out what the heck you are doing there. Perfection.

    Nature towards Milicz

    Prusice

    It’s June as I’m writing this, so the fields were blooming with crops and flowers. It was a really colorful and intense experience, both for the eyes and for the nose, but in a good way. Next, I took a right turn leading to the town of Prusice. It has one of the lowest average monthly incomes per inhabitant in Poland, so I was surprised by how nice the main square was. Not surprising to anyone who has visited towns around Wrocław, but still a nice one.

    Fields near Prusice 1

    Fields near Prusice 2

    Prusice to Oborniki Śląskie

    That’s the part where I had to share space with cars for a couple of kilometers. I think the part leading to Oborniki Śląskie could be quite challenging for average bikers. The ascent is really long, I would say. It’s not a hardcore one – I wasn’t exhausted after it, as it’s only about 200 meters of climbing in the end – but an average biker could be challenged by it. I would suggest taking a break in Oborniki Śląskie. Check out the DeguStacja restaurant; vegan options are available.

    Ascent towards Oborniki Śląskie

    However, the climbing really pays off. You will get the pleasure of a delightful, extremely engaging downhill ride into Oborniki Śląskie.

    View after ascent

    Descent to Wrocław

    Then from Oborniki Śląskie, you have a gentle descent towards Wrocław, mixed with some gravel riding, which makes for a really, really nice outro.

    Gravel route to Wrocław

    The Finish Line

    As I mentioned, the last part of the route is a mix of bike lanes, pedestrian paths and crosswalks alongside a very busy road connecting the city center with the suburbs. The greenery is there, obviously, but as I said, it’s rather noisy.

    Busy road near Wrocław

    Comparing the end of the route with the beginning – those are two different worlds. The first is about calmness, nature, and chill, and the second is about traffic and busyness. But the route ends at home. Have a great ride!


    If you’re an infrequent writer, as I am, and wondering whether you should experiment with Medium’s Partner Program, I would suggest reading this post. I have been using Medium quite extensively, on a daily basis – mostly to keep up with the industry, expand my point of view and break out of my information bubble. On the other hand, seeing how “popular” my first blog post was, I decided to go with the flow, join the Partner Program, and see what earnings I could get.

    Numbers! Numbers! Facts!

    Blog Post                                                     Views   Reads  Earnings
    Take your fucking notes in the era of AI.                        32      57     $0.09
    Android 15 enhanced App Links is an opportunity                 177      65     $0.30
    How do I feel like an ADHD-powered Software Engineer?             3       0     $0.00
    Mobile DevOps Blueprint: Android UI testing on Bitrise        3,600   1,970     $0.11
    App Links: Multiple Android Build Types, Single Domain       14,500  10,000     $0.29
    You shouldn’t take AWS Pearson Vue Online Proctoring          3,800   1,300     $8.01
    Be effective with Bitrise CI for Android—lessons learned     14,800   6,700     $8.09
    How to start writing reusable components for Android apps?   14,800   6,700    $37.30
    Total                                                        51,712  26,792    $54.19

    The final verdict

    As you can see, I reached almost 52 thousand views and 26 thousand reads, hundreds of claps (likes) and dozens of followers. With all that I was able to earn 54 bucks. Not bad – but not great. A Medium subscription costs 60 bucks per year, and you have to have 50 followers and a few stories to join the program. Being a casual writer does not pay for the subscription. I obviously know I could generate tons of content from my notes and produce tons of mediocre posts, but it’s not in my DNA. Let’s have all that content here, for free. See ya around!


    Take your fucking notes in the era of AI.

    And poke at “AI”, but don’t assume it’s going to do the job if you can’t.

    Most of us young folks are aware that in the era of so-called AI, or so-called Gen-AI, everything needs to be automated. Aren’t we? Is that what we want? I highly doubt that.

    I’m writing this from the position of a Software Engineer who, ugh, wants AI to succeed. I want us to do crazy, creative stuff with less overhead. No, I don’t mean stakeholder value; I mean we need to grow as conscious beings, find new ways, solve challenges and make the world a better place. #doNotYellCommieAtMe

    Anyway, if we’re aware of the fact that AI does not solve complex challenges, nor is it going to anytime soon (or maybe it will and I will need to change my mind 🤷‍♂), why are we investing more and more time and money into “AI-based tooling”? I’m asking myself. I’m doing it for at least two reasons. The first one is that I’m interested in technology in general and its impact on society. I also want boring tasks to be done quicker so I can do more creative and fulfilling work – or just spend more time with my loved ones. The second reason, on the other hand, is quite prosaic. I fear that I won’t stand out anymore, that I won’t be relevant in the era of aggressive optimisation – and I think we have all felt that recently: the fear of being left behind. And fear is cured with action, by embracing the tech bros’ go-to solution to everything: AI. And I’m guilty as charged! Currently surfing on Windsurf, ChatGPT and Copilot. L O L

    Why!?

    Back to the topic. If you’ve realised that people around you stopped taking notes during meetings… ask them why. Is it because an automatic AI-based transcription is running in the background? Ask them this: What have you been noting so far, mate? What people said? What’s the role of taking notes? I stopped taking notes for a brief moment. Now, I’m back to it. I realised this: taking notes was never about what people have said.

    Taking notes is about building a mental model of the challenge and the questions the team is facing. Pinpointing crucial knowledge and creating wisdom from it. It’s about uncovering insights and directions by asking the right questions, connecting dots, nailing down assumptions and making space for thoughts and ideas that are important but do not necessarily have to be addressed here and now. I cannot really express how different this is from the AI-based transcription happening in the background. So, please, do take your fucking notes. #okBoomer Draw manually (on screen too!) and ideate like a human, because that will always be a worthy, interesting and fulfilling task to do.

    And try out AI tools from time to time. They might help with some of your challenges.


    In the latest Android 15 update, developers are greeted with enhanced control over App Links, introducing a level of flexibility and precision that was previously unattainable. Let’s talk quickly about that. For the example and summary, please jump to the end of the article.

    Software Engineers now have the power to exclude or allow app links. It’s something new, delivering a fine-grained experience: as you can imagine, disabling or enabling an entire group of app links, or a very specific scenario, is now available to us.

    On top of the above, beyond the traditional <data> attributes, Android 15 extends its functionality to allow filtering based on:

    • a query parameter and the corresponding value,
    • fragment,
    • and path prefix;
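
    For instance, in an illustrative link like https://example.com/products/shoes?campaign=summer#reviews, the path prefix is /products, the query parameter is campaign with the value summer, and the fragment is reviews – each of which can now take part in app link filtering.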

    The granular approach Google introduced to app link management ensures that a user is taken to the appropriate resources in your application. Mixing that with accurate data being passed as query params, for example, a user can see improvements from enhanced filtering and a smoother experience, to less bandwidth-heavy and more narrowed API calls! As long as your code can handle it.
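
    As a minimal sketch of the “as long as your code can handle it” part, here is how an Activity might read query parameters from an incoming app link and use them to narrow an API call. The class, parameter and function names are illustrative, not from the article:

      import android.net.Uri
      import android.os.Bundle
      import androidx.appcompat.app.AppCompatActivity

      class ProductActivity : AppCompatActivity() {
          override fun onCreate(savedInstanceState: Bundle?) {
              super.onCreate(savedInstanceState)
              // e.g. https://example.com/products?campaign=summer&userId=42
              val uri: Uri? = intent?.data
              val campaign = uri?.getQueryParameter("campaign")
              val userId = uri?.getQueryParameter("userId")
              // Pass the extracted values on, so the backend call fetches only
              // what this user actually needs instead of a generic payload.
              loadProduct(campaign, userId)
          }

          private fun loadProduct(campaign: String?, userId: String?) {
              // hypothetical: delegate to your ViewModel / use case here
          }
      }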

    Embracing these updates, developers can craft more refined and targeted app experiences, while users enjoy more customised interaction with their apps.

    No more talking! Let’s dive into some details and examples.

    Review the new stuff. The work is mine.

    Bearing in mind that nothing is perfect, including the URL/URI schema and design in every company, the new App Links capabilities can save us from previously impossible scenarios.

    I will omit the path prefix examples, since they are quite obvious and, on top of that, we already had pathPattern defined in the <data> attributes. Please see the right-hand side of the above image.

    I would argue that a mix of allow/block lists and the query parameter filter group is a simple yet powerful feature delivered to our hands. I myself benefited from it in the following scenario.

    In the company I worked for, we had a request to reroute mobile web users to the native application only when a particular query parameter with a particular value was present and when another query parameter with any value was present as well. It would enable us to personalise the experience for a given user; I cannot really tell the details now.

    The challenge was… the domain, host, schema and path were always the same. We obviously weren’t able to change the schema or even the path, since it would influence the entire business (web + native). Also, those links were the most common URLs (schema) used by the company. We weren’t really ready nor happy to handle all of the variations, and we didn’t want to provide a generic experience where you’re thrown onto a main screen 90% of the time – since the mobile web experience was very good. The native mobile experience was obviously excellent, but still narrowed to a few features.

    I want you to imagine how you could improve your user experience and business metrics as well. If you had a hard time getting clues from the above article, here is a short list of examples and ideas you could start implementing with fine-grained app link filters:

    • You can lower the bandwidth used — the amount of data being transferred via API calls, by both getting precise data from app links and by using those in API calls.
    • You can easily filter out the content you do not need — and narrow it to the stuff your user is really looking for.
    • You can be more specific around business metrics and analytics and how web and native apps work together.
    • You can prevent falling back to native apps whenever they aren’t ready, while attaining nice app link coverage where needed.
    • You can work around (or hack around) a bad URL schema design in your company and get what you really need for your app/user.

    Thanks for reading!


    How do I feel like an ADHD-powered Software Engineer?

    It’s like being a chipset with one processor and one virtual thread. However, you’re forced to run software that requires sophisticated parallel processing. You know you could do better sometimes, but you can’t. You daydream about your wasted potential.

    It’s like being sure you have a lot of cache memory available, but cache misses happen too often since your algorithms are quite fucked up. There are heuristics, but they are not always the best. They are better than most, but not always, and at random moments.

    You know you have a massive L1 and L2 cache size. Your processor runs more efficiently than most people’s processors but just for the shortest period. This leads to overheating quickly.

    On the other hand, the L3 cache is not even considered as a source of commands. So, in the end, you forget to do basic stuff – for example, turning on the cooling system (taking some rest and an emotional reset). This leads to being nearly dead at the end of the day (from overheating), since the deadline triggers alarms instead of planned maintenance.

    It’s like you know your memory drive cannot store commands (ideas, plans) so you have to process them NOW or NEVER. Then it’s gone.

    You’re aware that you cannot process boring stuff for too long. The system would eventually shut down, since “boring” stuff makes it go “blue screen of death”.

    It makes you stuck for days on tasks that should take seconds. And you need someone to help you reset. You need manual, external intervention – at least you feel that way.

    Have you tried to turn it off and on again?

    You are supposed to!

    But do not worry. Attention deficit disorder is a mixture of super-powers which makes you an interesting person. But yeah, it’s a super-burden as well. We’re not just happy people with lots of energy.

    Be aware of yourself. Be kind. Cut off daily information ingestion. Take care of yourself. Take breaks. Change your habits and follow your dreams when you can. Take a rest, when you need it.
    The world will wait. Even if the entire world is screaming the opposite.

    PS

    This is my own, slightly satirical opinion on ADHD/ADD. Please do not treat it as guidance, because I’m not an expert and we are all different.


    Mobile DevOps Blueprint: Jumpstart into Android UI testing workflow on Bitrise CI

    Just run those tests in minutes

    The Challenge

    I want to run UI tests on Bitrise CI asap. I have a sample test suite, or I’m migrating from another CI. Perhaps I want to customize the workflow a bit to run a custom annotation that scopes the test suite, or to set a device config, etc.

    Who would benefit?

    DevOps, Automation Engineer, Android Dev, Tech Lead.

    Assumptions

    I’m assuming you’re a beginner to intermediate Bitrise user, or you’re a skilled professional looking for a shortcut to just run your test suite asap. What I’m providing here is supposed to be a blueprint for you to jumpstart running your tests on Bitrise CI.

    This article will cover the Mobile DevOps side of how to kick off UI testing on Bitrise for Android. Code included. I’ll also share a couple of good practices for UI automation configuration.

    I’m assuming you want a quasi-infrastructure-as-code approach, meaning you are going to use bitrise.yaml instead of the Bitrise web console. In fact, please use bitrise.yaml + a version control system to maintain control over changes and to learn more.

    I’m assuming you have configured the auth mechanism between your repository and Bitrise, meaning you can clone your repo while running Bitrise builds.

    I’m assuming you know where to find the most recent bitrise.yaml version. BTW See Workflow Editor -> bitrise.yaml tab and download it.

    The Firebase Device Lab will be used to run the automation suite. The default account is linked with your Bitrise instance — it’s recommended to provide your own account.

    If you would like to learn in-depth Bitrise concepts and how to navigate and use it in an optimized way, I would recommend you read my article on being effective with Bitrise CI, available on this site and under the link on medium.com. These two articles will share a few tips for sure.

    The Blueprint

      runTests:
        steps:
          - git-clone: { }
          - cache-pull@2: { }
          - install-missing-android-tools@3: { }
          - android-sdk-update@1: { }
          - android-build-for-ui-testing@0:
              inputs:
                - variant: debug
                - module: app
          - virtual-device-testing-for-android@1:
              is_skippable: true
              inputs:
                - test_type: instrumentation
                - test_devices: "Pixel2,28,en_US,portrait"
                - inst_test_targets: "annotation com.example.annotation.SMOKE"
                - inst_use_orchestrator: 'true'
                - test_timeout: 3600
          - custom-test-results-export@0.1:
              inputs:
                - test_name: "*"
                - search_pattern: "*"
          - cache-push@2: { }
    

    Just basic configs, nothing to see here, really. The runTests steps live alongside your other steps.

      runTests: #name of the workflow
        steps:
          - git-clone: { } # clones repo, more config may be required, out of scope
          - cache-pull@2: { } # gets prev cached artifacts, faster build
          - install-missing-android-tools@3: { } # update
          - android-sdk-update@1: { } # update
          # this is where testing happens
          - custom-test-results-export@0.1: # Test reports. You want them.
              inputs:
                - test_name: "*"
                - search_pattern: "*"
          - cache-push@2: { } # push artifacts
    # Optionally you may like Slack notification
    

    As said before, no magic happens here – just housekeeping like cloning the repository and making sure things are up to date. Keep an eye on the reports; they are useful for tracking exactly what’s wrong with your tests, including screenshots, logs, state, execution time, etc.

    Ok, let’s explain the UI tests running part.

    runTests:
        steps:
          # Test builds
          - android-build-for-ui-testing@0:
              inputs:
                - variant: debug
                - module: app
          # Testing
          - virtual-device-testing-for-android@1:
              is_skippable: true
              inputs:
                - test_type: instrumentation
                - test_devices: "Pixel2,28,en_US,portrait"
                - inst_test_targets: "annotation com.example.annotation.SMOKE"
                - inst_use_orchestrator: 'true'
                - test_timeout: 3600
    

    How are test builds prepared?

    # Test builds creation
          - android-build-for-ui-testing@0:
              inputs:
                - variant: debug
                - module: app
    

    No magic here again. Both your APK and testing APK will be built as a part of these steps. Bitrise will figure out the paths to APKs itself. If you want to see the guide — here is the official one.

    The UI testing steps

    - virtual-device-testing-for-android@1:
              is_skippable: true
              inputs:
                - test_type: instrumentation
                - test_devices: "Pixel2,28,en_US,portrait" #example
                - inst_test_targets: "annotation com.example.annotation.SMOKE" #OPTIONAL
                - inst_use_orchestrator: 'true'
                - test_timeout: 900
    

    Step by step

    is_skippable: true # not required
    

    You may want to have a green build even though the UI test failed because some of the tests are flaky and brittle. Maybe you just care about configuration (for now) or maybe you’re running an experiment. It’s useful. Use it whenever necessary.

    - test_type: instrumentation # required, other: game loop, robo
    

    In most cases, Android UI tests are instrumentation tests. Ask your Automation Engineer whether that’s true for you.

    # required, schema deviceID,version,language,orientation
    - test_devices: "Pixel2,28,en_US,portrait"
    

    You have to provide at least one device; you can provide many, separated by commas.

    You can find config details and a list of devices on the official Bitrise Github step codebase.

    - inst_test_targets: "annotation com.example.annotation.SMOKE" # optional
    

    You can define test targets – if none are provided, every test in the androidTest package will be triggered. If you do provide targets, you’re required to use a fully qualified class or a package.

    You can also configure your test targets via annotations. Annotations give you the opportunity to scope what tests to run by grouping them. It’s up to you what, how, and when to group, which lets you adapt to the current context (release, feature branch or smoke testing?) – it can be done both manually and dynamically. A short and sweet article on annotations is found here.
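
    As a minimal sketch of how such an annotation might look in test code (matching the com.example.annotation.SMOKE target used in the step above; the test class and method names are illustrative):

      // File: androidTest/java/com/example/annotation/SMOKE.kt
      package com.example.annotation

      // Marker annotation used purely to group tests for filtering.
      annotation class SMOKE

    And a test tagged with it:

      // File: androidTest/java/com/example/login/LoginFlowTest.kt
      package com.example.login

      import com.example.annotation.SMOKE
      import org.junit.Test

      class LoginFlowTest {

          @SMOKE
          @Test
          fun userCanLogIn() {
              // ... UI interactions and assertions
          }
      }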

    - inst_use_orchestrator: 'true' # optional, default false
    

    This listing defines if tests are using Android Test Orchestrator or not. Default: false. Ask your Automation Engineer.

    - test_timeout: 900 # optional, default 900=15m
    

    A question: How much time must pass before a test is canceled? Default 900 = 15m. I would assume it should be much quicker for an efficient automation suite.

    But there is obviously much more, like

    - num_flaky_test_attempts: 0
    

    And more, and more, and more.

    You can compare what I have shown you with the official documentation on Github. In this case for the UI testing step. It’s way better than website documentation.

    We’re finally at the end. Now, you can use the provided blueprint and run your test suite in minutes via Bitrise. Happy coding!

    I hope you like this piece. I love a fast feedback loop. If you have any objections, comments, or questions – please drop a comment or a DM.


    I won’t go into the configuration or the benefits of Android App Links – you can find them here, here, and here. Instead, I would like to elaborate on a missing bit of documentation.

    Specifically, how to associate and verify ownership between your domain (a website) and multiple build types of the same Android application using a Digital Asset Links JSON file. Simply put, how to get rid of the disambiguation dialog in such an environment.

    We do not want that, ever! Source: flickr, licence CC.

    While it’s not groundbreaking research, this guide can spare you a couple of hours or even days of waiting impatiently to see if ownership over an app has been verified.

    In most cases, you’re not able to deploy a Digital Asset Links JSON file on your own, so you’re dependent on a third party to do so. That means waiting. It’s better to do it once, properly.

    App links (deep links) are usually not a standalone functionality of an app, but rather a collaboration between many teams that drives users into an app (for example, marketing campaigns).

    More precisely, app links usually require a bit of collaboration between the mobile team and the marketing/DevOps/web/backend/you-name-it teams in your company.

    Let’s give an answer by starting with a question.

    Given the fact that an app has multiple build types, each having its own distinct SHA-256 signing certificate fingerprint

    When a subset of build types points into the same domain

    Then could ownership be verified for all build types using a single domain?

    This is the “complexity” we’re facing. Source: DIY.

    Let’s give an example since it might be confusing.

    An app has DEV (debug) and QA/UAT/SIT (debug and release) build type configurations that all point to the same domain (website), say, uat.example.com. That means you have two “apps” with two distinct signing key configs: two distinct SHA-256 certificate fingerprints are used to validate ownership of App Links via a single domain that hosts the assetlinks.json file.
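
    For context, here is a minimal build.gradle.kts sketch of how such distinct package names and signing keys per build type typically come about. The suffixes and signing config names are illustrative, not taken from the article:

      android {
          defaultConfig {
              applicationId = "com.example.app"
          }
          buildTypes {
              getByName("debug") {
                  // becomes com.example.app.buildType1, signed with the debug keystore
                  applicationIdSuffix = ".buildType1"
              }
              create("uat") {
                  // becomes com.example.app.buildType2, signed with a dedicated key
                  applicationIdSuffix = ".buildType2"
                  signingConfig = signingConfigs.getByName("release") // assumes such a config exists
              }
          }
      }

    Each resulting (package name, certificate fingerprint) pair then needs its own entry in the assetlinks.json file discussed below.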

    If you have a single domain, can you verify ownership of two apps using two distinct certificates?

    Yes, you can, it’s documented.

    Our case is a bit different. What if an app also has links configured for the uat.mobile.example.com subdomain, and both debug and release build types of the same app point to it? Could I verify ownership of a second subdomain using both debug and release certificates again?

    The answer is: Yes, it can be easily achieved.

    This is the assetlinks.json content you have to upload to your subdomains.

    [
      {
        "relation": ["delegate_permission/common.handle_all_urls"],
        "target": {
          "namespace": "android_app",
          "package_name": "com.example.app.buildType1",
          "sha256_cert_fingerprints": ["debugCertificateFingerprintHere"]
        }
      },
      {
        "relation": ["delegate_permission/common.handle_all_urls"],
        "target": {
          "namespace": "android_app",
          "package_name": "com.example.app.buildType2",
          "sha256_cert_fingerprints": ["releaseCertificateFingerprintHere"]
        }
      },
      {
        "relation": ["delegate_permission/common.handle_all_urls"],
        "target": {
          "namespace": "android_app",
          "package_name": "com.example.app.buildType3",
          "sha256_cert_fingerprints": ["anyOtherFancyCertificateFingerprintHere"]
        }
      }
    ]
    

    In the example above, the Digital Asset Links JSON file is configured to support a particular build type

    "package_name": "com.example.app.buildType1"
    

    which should be verified using the debug SHA-256 certificate fingerprint

    "sha256_cert_fingerprints": ["debugCertificateFingerprintHere"]
    

    while the second build type has a “release” nature and uses a release SHA-256 certificate fingerprint for ownership to be verified

    "package_name": "com.example.app.buildType2"
    
    "sha256_cert_fingerprints": ["releaseCertificateFingerprintHere"]
    

    Lemme explain a bit. The Digital Asset Links JSON file is a plain JSON file.

    It contains a list (an array) of objects as its top-level element.

    All you have to do is provide an entry for each of your build types with the appropriate SHA-256 fingerprint, and deploy it to each domain you’re interested in. Voila!

    This configuration is extremely useful if you love the flexibility and the opportunity of using any number of environments with any of your build types. It enhances the testability of an app and increases data diversity while easing debugging – all because you can choose which environment will be used while implementing and testing your application.

    Yeah, sharing is caring, remember that!

    Thank you, Wojtek for the peer review. It’s delightful to have such support. :)


    You shouldn’t take AWS Pearson Vue Online Proctoring exam and here is why

    Today, I took the AWS Architect Associate exam. I’m already a holder of an AWS Developer Associate certification, so I expected no issues. I couldn’t be more wrong!

    I will tell you what to expect, what the policies are, what surprised me and why you should avoid the online proctoring exam if you have alternatives.

    What will you be obliged to do before even starting the exam?

    1. Be sure to run the system test. You should do it in the same room, with the same devices and WiFi that you will use during the exam. I have a couple of WiFi networks which can be switched dynamically, so I checked all of the options beforehand. Think about what could go wrong, because something most likely will.
    2. Read the online exam policy; that’s the best thing you can do. And don’t be scared of it at all. I won’t elaborate on it. I will describe what wasn’t there later in this article, so you will be well-prepared!
    3. Disconnect all external devices from your PC/Mac/laptop and remove them from the desk. That includes an external monitor if you’re using a MacBook or Windows-powered laptop. Any Linux-powered devices are strictly prohibited. I ended up disconnecting and removing my monitor 10 minutes before the exam! My fault.
    4. You have to be in a closed, well-lit and calm environment. They need to see closed doors as a part of the “room verification” process. No one can enter, no one can talk, remember!

    What was surprising since it wasn’t written in the policy?

    The recommendation is to start 30 minutes (!) before the scheduled exam time, but it isn’t written why. It’s because the entire verification process takes eons. In my case it took 1h, and sitting and waiting was the major part of it, although I’ve seen it could be worse than that. One must be ready to spend 140 minutes on the exam alone. Additionally, there will be a couple of minutes of pre- and post-exam activities. Finally, you will have to spend from a (promised) 15 minutes to 1h (or more, as a few people shared) on the verification process. That adds up to almost 4h spent in one position in front of your computer. Remember, while the verification process and the exam are happening you cannot stand up, you cannot go to the bathroom, you cannot drink or eat, you cannot talk and you cannot look out the window. Talk to your partner and/or roomies and plan for it. You have to make your room a Temple of Silence for the time being.

    As I mentioned, they will verify your identity. That’s understandable and would also happen on-site. The verification process can be done via your smartphone, for example. Also, a proctor might try to resolve technical issues with the exam by calling your phone. But, according to the policy, you’re not allowed to have your phone within your reach, nor any smart devices. If someone called you, the exam could be cancelled. I recommend having your phone on silent mode, at least 1.5 meters away but with the screen visible, so you know when the agent is calling (a strange, unknown, international number) and you won’t end up with a cancelled exam because you picked up a call from your boss/partner/parent/friend.

    The policy also states that you need a government-issued ID card in order to identify yourself before the exam. During the verification process I was able to choose from a driver’s license or a passport only. An ID card was not an option, and that was a big surprise to me. I had to find one of the approved documents very quickly. Don’t do that to yourself; prepare an ID card and a second document beforehand, just to be sure.

    So, what really happened and why am I simply disappointed?

    As a part of verification process I was contacted by a polite agent. She checked my workspace using my camera view, so, what’s behind and in front of my desk, my hands and wrists for watches and cheat sheets. All good! Good to go!

    Everything was fine until she disconnected from the chat and remotely triggered the exam. My entire screen went black instantly. The only thing I was able to see was an overflow menu with the following items: the Pearson logo, chat, whiteboard and my camera view. I waited around 10 minutes hoping for the exam to load. Then I clicked the ‘chat’ icon and, instantly, the entire application froze. I waited for some time hoping it would ‘unfreeze’; unfortunately, it did not happen. I had to force-restart the entire device.

    I’m running a MacBook Pro with a quad-core CPU and 16GB of RAM. You can be sure it’s enough; I run Android Studio, Docker and much more on a daily basis. Additionally, I was on a 300 Mbps fiber-backed internet connection. So, I kinda blame the Pearson software.

    How to resolve a broken exam situation?

    You need to go to the AWS-dedicated Pearson help center. Since my country doesn’t have a dedicated Pearson phone number, I spent more than an hour waiting for an online assistant. It took around 15 minutes to open a case. Now, I need to wait from 3 to 5 business days to resolve the situation. The entire process is “very corporate”, if you understand what I mean. It’s slow.

    Take the exam on site!

    Take the above-mentioned points into consideration while scheduling an exam. It’s all understandable, but it’s quite limiting compared to taking an exam on site. On site you can go to the bathroom, drink, you will have a piece of paper to work on, and you can relax a bit. If you’re able to schedule on site, I recommend it. I won’t do an online exam again unless #covid19 forces me to.

    Obviously, shit happens. I’m not furious nor mad, but I want to warn you what may happen and how the reality looks, so you’re not disappointed. Every student spends a lot of time beforehand in order to pass an exam – it’s a bit frustrating when technical issues get in the way. I understand software sometimes doesn’t function as expected, as I also introduce issues from time to time. All software engineers do. Simply, if I had a choice between a stable on-site exam environment and unstable, unpredictable software – my choice is obvious.

    What was your experience with online proctor exam? Let others know in the comments!


    Be effective with Bitrise CI for Android — the lessons I learned the hard way.

    Bitrise logo

    I won’t elaborate here on how important and crucial the continuous integration (CI) practice is for any software-development-oriented team. I’m pretty sure we can all agree on how CI tools support our day-to-day effectiveness and how they might save dozens of hours spent on non-essential tasks. Yet, it’s common to see CI tools as a hassle: slow, bulky and unreliable pipelines bloated with chaotic events, instead of a fast, maintainable feedback loop configured to support both product quality and team flexibility.

    As the title implies, our CI process was far from optimal. We learned what “slow and chaotic” means the hard way. Below, you will find an overview of each issue that slowed us down, with a full explanation of the solution (including code and external links), as well as honest results measured in minutes.

    In this article, you will find a discussion of architecture, flavour-agnostic unit testing and Gradle usage, as well as keeping your logs and artefact deployment in order. Additionally, at the end of the article, several tips and tricks beyond optimisation are included. It’s not a step-by-step tutorial. We gathered results that work for us, and you have to think them through. If those solutions make sense to you then, and only then, apply them to your environment.

    The landscape

    In order to fully understand why we applied a particular optimisation, it is crucial to understand how our landscape looked at the time.

    There is a git flow approach in place, which usually means multiple feature branches exist at the same time in the remote repository. There is at least one pull request per story. Each pull request needs to go through an integration process, meaning the newest commit in a pull request triggers a fresh CI build. That’s done in order to ensure the newest change won’t introduce any flaws. Yep, automation and unit test suites test each software increment. Software Engineers in Test (SETs) write automation tests as “a part of” the feature in some cases.

    We are supporting multiple modules as a part of our architecture. Let’s assume it is a clean-ish architecture with domain, data and app layers packed into separate modules. Each of the modules has its own unit test suite – from dozens to a few hundred tests per module. We have to support multiple flavours and they differ greatly. Each flavour has a separate set of automation and unit tests, although most of them are shared.

    When it comes to infrastructure, there is a separate Bitrise workflow for every build type, and also a separate one for each of: feature development, automation efforts, release (tags) activities and post-merge to develop. Seeing how many distinct configs we have, there is a need to run multiple builds every day. We can’t and won’t have an “infinite” number of concurrent jobs, so the time devoted to each build is very important to us. It’s also important because we value sh*t done the right way.

    The basic measurement that will prove effectiveness here is build time – either the entire build time or a particular step’s time (such as the unit tests step or the deploy step).

    Improvements

    Unit testing

    The most commonly used feedback loop is the unit test suite, in particular if you’re supporting multiple flavours of an Android app and you want to be sure that no change will break any of the flavours. Unit tests are supposed to be a fast and reliable feedback loop which can be automated at the CI level. So, we used docs and tutorials to set them up for all of the flavours. After a few changes to CI, we ended up with a 30-minute-long unit test step for 3 flavours. Yes, you read that properly: 30 minutes for 3 flavours.

    Ok, let’s fix that.

    After a little bit of research it occurred to us that we used two separate steps for unit tests. The Android Unit Test step for Bitrise was running the app module unit tests. The Gradle Unit Test step was just running the ./gradlew test task.

    Total time for each step in minutes.

    What’s wrong with the Gradle Unit Test step in our case? According to the Gradle documentation:

    Gradle CLI documentation screenshot Source: Gradle documentation

    In simple terms, ./gradlew test triggers dozens of different test tasks from every module. In our case, it triggered both debug- and release-related tests for every subproject (module). That’s too much redundancy; consider the final result of the ./gradlew test command:

    (number of flavours) x (number of supported environments) x (number of modules)
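
    To put rough numbers on it, with the 3 flavours, the debug and release variants and the handful of modules mentioned in this article, that formula works out to something like 3 x 2 x 4 = 24 unit test task runs per build, where only a fraction of them actually tells you anything new.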

    But the number of tasks triggered is not all we can improve here. I already mentioned we have several modules. Since it’s a clean-ish architecture, it consists of app, domain, data and api modules. It’s easy to see that some of those modules are flavour agnostic – the domain, data and api layers can and should be treated as libraries. Those are external dependencies that could be used from any JVM-compatible code. Do we need to run those tests separately for each flavour? Of course we don’t! Where does that lead us?

    Flavour agnostic unit tests

    Split flavour-dependent and flavour-agnostic unit tests and gain greater control over how your application is tested. Use the Gradle Unit Test step in your bitrise.yml to run targeted, flavour-agnostic unit tests, like this:

    My gist lets you configure module dependent unit testing. Source: this gist

    Using the unit_test_task attribute enables you to configure a particular task to be run – basically, any Gradle task. You can obviously chain Gradle commands, but I want granularity here. Additionally, using the title attribute keeps build logs in order and enables you to track each step separately.
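
    For illustration, a flavour-agnostic module can be a plain Kotlin/JVM module, so its tests exist only once no matter how many flavours the app module defines. A minimal, hypothetical domain/build.gradle.kts might look like this (module name, plugin setup and dependencies are illustrative):

      // domain/build.gradle.kts – no Android plugin, no flavours
      plugins {
          kotlin("jvm")
      }

      dependencies {
          testImplementation("junit:junit:4.13.2")
      }

      // Its unit tests run exactly once via a targeted task such as
      //   ./gradlew :domain:test
      // which is the kind of task you would point unit_test_task at.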

    The result of applying the above-mentioned recommendations. Cleaning up resources gave us unit test results in seconds instead of minutes.

    Flavour dependent unit tests

    The second recommendation relates to the Android Unit Test step for Bitrise and how flavour-dependent unit tests are managed. In most cases, I would recommend running only what you need, but I came to the conclusion that ‘run only what you need’ could be counterintuitive in our case.

    It’s really easy to break one of the flavours by introducing changes to only one of them. That’s why we ended up running unit tests for every flavour in every build. In addition, the above-mentioned set of flavour-agnostic tests is triggered. What does that mean when it comes to the Bitrise CI setup?

    Targeted unit tests per flavour. Source: this gist

    The above snippet runs unit tests for the app module for a particular flavour, injected as an environment variable, and a particular build variant. So, if CI builds only one flavour at a time, this snippet is supposed to be triggered three times, once for each flavour. If all of the flavours are built simultaneously, then each flavour should run its own unit tests in order to avoid redundancy and shave a few minutes off the build. Notice that, before the _UnitTestsPerFlavour step, the UnitTests_Flavour_Agnostic_Modules step is triggered. It runs the flavour-agnostic tests, so the domain, data and feature module unit tests. Either way, all unit tests are always validated.

    Alternatively to the above setup, you can hardcode which flavour’s unit tests should be run, directly in the step’s unit_test_task.

    That way we’re covered, no matter whether we’re building all the flavours or just one. Remember, the lesson here is that flavour-dependent and flavour-agnostic unit tests should each be triggered once. There is no redundancy, but there is full coverage. Every software increment is safe.

    Results

    We started with around 30 minutes per build.

    Total time for unit testing then.

    And finished up with the below results when running one flavour.
    Down to ~5/6 minutes per build.

    Total time for unit testing now.

    And also down to ~3 minutes per build when running all the flavours
    at once, which means each flavour is responsible for its own unit tests finally. Yes, that’s a separate config in order to optimise build time even further.

    Total time when running all of the flavours. Build time for those unit tests per one flavour.

    Artefacts Deployment

    Review your Deploy to Bitrise.io step. According to the documentation [1] [2] [3], test reports are deployed automatically for the following steps:


    • Android Unit Test
    • iOS Device Testing
    • Virtual Device Testing for Android

    As noted in the documentation, by default Android unit and UI tests are deployed to Bitrise directory and are provided via the Test reports tab. They are easily accessible — but the question is — are they really necessary?

    We have robust unit tests. They rarely fail in CI, because the entire team writes and runs them frequently. On the other hand, it’s easy to check the Bitrise logs to see which tests failed.

    Deciding what to deploy

    We already changed from the Android Unit Test step for Bitrise to the Gradle Unit Test step, which does not deploy unit test reports automatically – and we want it that way. What about the rest of the artefacts? For automation builds we’ve decided not to deploy any APKs. They are not needed.

    We also already know that the Virtual Device Testing for Android step deploys UI test results into the Test Reports directory. We decided that, for all of the builds, we are going to move or remove the Deploy to Bitrise.io step completely as an experiment. Also, the Deploy to Bitrise.io step is always triggered before unit tests but after APK creation. That way, only the application (a uatRelease APK, for example) and the UI test report are deployed.

    Initially, the Deploy to Bitrise.io step took from 2.1 to 3.2 minutes.

    Initially, 3.2 min was the total time for this step.

    After the changes it’s 0 minutes for some builds. It is ~8 seconds for most of them.

    That’s how quick it could be!

    Oh yeah! Source: Know Your Meme

    Automation workflow

    One of the low-hanging fruits was to change what is being done as a part of each particular workflow, since they all have different goals. As I mentioned, we have feature, automation, develop and release workflows. In our case, initially, all of the mentioned workflows had basically the same setup. Why is this wrong? Because, as we said, workflows simply have different responsibilities.

    Understanding the differences in workflows

    I have already mentioned the automation workflow. That’s because it is special compared to the other workflows. The only responsibility the automation workflow has is to support Software Engineers in Test in writing and securing the automation test suite. That simple conclusion means we can trim several steps from it; in our case, APK and other artefact creation and deployment. We were also able to get rid of the custom scripts we had there for the release app, “runtime” resource optimisation steps and beyond.

    Results

    By doing this, the automation build is a fast feedback loop for the SETs.
    It takes around 10 minutes less than other builds.
    I believe it’s a huge win for the SETs team.

    Investigating tools configuration

    Here is a quick and simple story as an example. Our builds produced uatDebug and uatRelease APKs. UAT stands for ‘user acceptance testing’ and it’s also the name of one of our environments – an environment with an almost-production setup but with development data, used simply for testing purposes. So, producing those two builds sounds about right, doesn’t it? I started asking questions anyway. We were sure we needed uatRelease for testing purposes. It makes sense, since testing a production-ready app (release) against development data (uat) is one of the best practices. But why did we need uatDebug then?

    Trimming unused resources

    The sole reason was a misconfiguration of the Charles proxy tool, which left testers unable to use proxy tools while testing the uatRelease build variant. The famous network_security_config file had been added to the project, but it wasn’t working, since the build variant has to be debuggable. The quick fix was to add the android:debuggable attribute to all uat builds. And since we’re not shipping uat builds through any public channels, it’s secure enough.

    Results

    A simple configuration fix to the existing toolset brought an 8-minute time reduction to each build and fixed the SETs’ headache.

    All numbers together

    In summary

    • Unit test time down from 30 minutes to 3~6 minutes, depending on build type.
    • The automation build cut by another 10 minutes by removing a few unnecessary steps.
    • Artefact deployment reduced from 2.1~3.2 minutes to 8 seconds!
    • The fix to the Charles config gave us another 8 minutes, due to the uatDebug build removal.

    We were able to shorten each build by between 34 and 48 minutes. That was a huge win and a relief, as you can imagine!

    We obviously made some rookie mistakes. But the most important part is to learn from them. We were able to adapt quickly and we have been making other small improvements since then. It can’t happen on a daily basis, because we also need to deliver business value to our clients – but with an appropriate plan in place, I’m sure you can do even more.

    Tips and tricks beyond optimisations

    Bitrise and its plugins’ documentation is quite limited. You will need to deep-dive into the plugins’ code if you want to understand the platform fully. The plugins’ code is mostly open source – you can find links inside the plugin documentation. In particular, review the main.go file if you’re looking for attributes and parameters which could customise the build.

    Use Bitrise CLI in your terminal in order to test configuration locally. It will save you a lot of time.

    Make your CI steps as granular as possible. Use the title attribute extensively. Greater readability means greater control over time. Solid foundations are the first step towards future optimisation.

    Do what we haven’t done yet — introduce tools to measure build metrics automatically.

    Leverage version control, since Bitrise is similar to infrastructure as code.

    That’s a separate story, but at Tigerspike we optimised APK size by 13% during our internal hackathon day. You should be aware of best practices for Android app configuration. Get rid of, or optimise, resources, configuration and APK size. These kinds of things also impact your build time: git pull, compilation and build time, tests, deploy time – these are some of many examples.

    Listen. Observe. Experiment. Formulate a plan and adopt only what’s needed for your team. Good luck!

    Donald Duck says Thank you! Thanks! Source: 123emoji.com

    I hope you like this piece. As you can see, I love a fast feedback loop – if you have any objections, comments or questions, please drop a comment or a DM.


    How to start writing reusable components for Android apps?

    The purpose

    For anyone, like myself, interested in building custom, reusable view components for an Android app. And those who had problems with finding good guidelines on the topic.

    I will provide the basic reasoning behind architecting custom reusable views. There won’t be implementation details (code) for many reasons, one of them being my attempt to focus on the concept rather than the technicalities.

    Anyway, I hope that after reading this article you will be able to apply those principles to any interface architecture pattern such as MVP, MVC, MVVM 
    or MVI.

    The first part covers why and when to adopt such a technique. You can jump to the second part for the list of recommendations.

    First, ask yourself a question: Do we need reusable components?

    IMHO you should be a part of a bigger, long-lasting project to consider doing so. I understand it’s sometimes hard to draw a line between an MVP/short-term and a long-term project. It is also sometimes hard to let go of a technical nicety such as custom views, but please be mature enough to do so. The following list should help you make a decision.

    YAYS

    • If you’re in a flexible environment where design decisions are made from many perspectives (not just your own), so you can pitch for good UX/UI.
    • If you see that one (or more) view is used (or will be used) in many places. Plan for it. Those views should have fairly similar UX/UI to serve as reusable components; otherwise, reusing them will be challenging. Again, pitch the team to sustain a consistent user experience.
    • If your team plans to build extremely custom views like karaoke, a piano keyboard, fancy animated overlays, etc.
    • Finally, I have not tested this theory, but IMHO if you’re bored and feel like making an old legacy project interesting – since writing reusable components is like writing complex new features.
    • If you’re able to deliver business value by creating such components.
    • If your team plans major refactors and you have time to just play with the product.

    NOPE

    • POC or MVP cases. Although, MVP cases could be a good playground.
    • Small, simple app cases. There are situations where you know an app will be coded once and left as is. BTW Consider going cross-platform with those. Again, nice playground.
    • An inflexible environment: a design system handed to you up-front instead of being discussed with all parties involved.
    • Your codebase is not ready to handle reusable components. Please see the Prerequisites section at the very end for a few tips. In this case you might have to put in some work beforehand in order to enable reusable components.

    Ok, so now you’re sure. Let’s go with recommendations!

    1. Custom views

    It could be quite obvious to some and, as I noticed, not necessary for others. Deeply research the case of custom views first and try to imagine them as reusable, self-contained components. Don’t be afraid. There are plenty of materials on how to create a simple custom view. Remember, it doesn’t mean you’re responsible for rendering everything yourself, and it doesn’t mean you have to create those components from scratch either. It will be the better and simpler version of what Fragments are supposed to be.

    Think of a custom view as a container (e.g. a ViewGroup like FrameLayout) for other components grouped together to serve one purpose. For example, a list of results used in a couple of places, part of a product detail screen, maybe just a grid product item used on different screens of your app, or a search bar and the corresponding list of results. Be careful in the Toolbar/bars case: we had a lot of work extracting the Toolbar out of a custom view after we realised the Toolbar should be a separate, reusable component itself. The important parts are reusability, testability and independence. Decouple your view and controller from everything else; more on this shortly.

    I would suggest reading about Atomic Design in order to imagine the building process better. Basically, think of any widget, like a button or a label, as a basic building block for a bigger component, no matter if it’s a custom or a native one. Then imagine your custom, reusable view (imagine now a list of results with a reload button) and just compose those views together. Handle the composed views through your view controller and/or communicate through business logic (e.g. in a reactive way, but that is not the only option) and you’re good to go. A rough sketch of such a component follows below.
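
    A minimal sketch of that idea, assuming a hypothetical “list of results with a reload button” component; all class, layout and id names are illustrative, not from the article:

      import android.content.Context
      import android.util.AttributeSet
      import android.view.LayoutInflater
      import android.widget.Button
      import android.widget.FrameLayout
      import androidx.recyclerview.widget.RecyclerView

      // A self-contained, reusable component: the hosting Activity/Fragment only
      // adds it to the layout and shows/hides it.
      class ResultsListView @JvmOverloads constructor(
          context: Context,
          attrs: AttributeSet? = null
      ) : FrameLayout(context, attrs) {

          private val list: RecyclerView
          private val reloadButton: Button

          init {
              // R.layout.view_results_list and the ids below are assumed to exist
              // in your resources; they are placeholders for this sketch.
              LayoutInflater.from(context).inflate(R.layout.view_results_list, this, true)
              list = findViewById(R.id.results_list)
              reloadButton = findViewById(R.id.reload_button)
          }

          fun render(results: List<String>) {
              // bind results to the list adapter; kept trivial on purpose
          }

          fun onReload(listener: () -> Unit) {
              reloadButton.setOnClickListener { listener() }
          }
      }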

    Please remember, this is complex stuff. The first place you choose to refactor will probably be the wrong one; it was in my case. My tip would be to go with the simplest view first.

    DON’TS
    Fragments inside “main” fragment.
    Parent-child relation between views or view controllers. 
    Toolbar as a part of custom view. 
    Navigation bottom bar (iOS Toolbar) as a part of custom view. 
    Coupling, i.e. the Activity/Fragment doing setup for the view.

    DOS
    Custom view that serves one purpose
    Activity/Fragments just render/show/hide view.

    2. Use platform-native components as much as possible

    Speaking of custom views, don’t go fully custom if you’re not forced to. Reusing existing components and widgets just makes your life easier and lets you work on the actual business value unique to your app. Work alongside product/UX/UI/your own team to build native-first design systems. In most cases there is no need to go completely custom, and I’m sure your client is more than happy to give up an animation or a fancy screen transition in order to get business value in its place. Our clients are, #trueStory.

    What do you mean by native components?

    Google did a great job introducing Material Components — guidelines with the corresponding implementation you can reuse in your application.

    You could also consider non-view components like the Navigation Component, which plays nicely with a bottom nav bar, for example, and could simplify your codebase.

    You should take a look at how to compose and reuse themes and styles for your application; here is a presentation from Googlers on the topic.

    But in the end just reuse old, good widgets and layouts to get things done.

    3. Use modern app layer (view layer) design patterns

    Yep, and not only for the app layer. There are plenty of articles on the matter. I recommend reading Hannes Dorfmann’s story on MVI, since a lot of useful concepts and ideas are there. He elaborates on topics such as how to organise, test, reuse and maintain your code, which is not a part of this article, but the two topics are strictly related. On the other hand, I’m not a fan of tools like Mosby or Moxy, because I painfully found out they are rather constraining. Inheriting behaviour, state or plugins from base classes was catastrophic in our cases; refactors and building custom behaviour for new screens were messy tasks and required much effort. Instead, use lifecycle-aware components (again, see the Android tools) or set up your architecture in such a way that you can call your business logic anytime. Some people call it overhead; I would call it simplicity, especially since most of us happen to use caching.

    4. Use dependency inversion frameworks extensively

    Nowadays, there is a world beyond Dagger, and Dagger itself offers quite good support for Android apps too. You can leverage the dependency inversion principle, alongside the framework of your choosing, to inject views the same way you are injecting other collaborators like controllers, use cases or data sources. That way you’re decoupling the actual view from the place it is rendered.

    The great news is that your custom view is a View implementation, which means your Fragment or Activity doesn’t have to know what is rendered inside. Providing View subclasses enables you to use many custom views interchangeably as a particular component. Voila! Basic A/B testing setup done: based on condition X you’re providing View Y or Z. Or you want to support multiple apps with one codebase? No worries, just inject different views for different flavours. There is much flexibility in this approach.

    The only thing the Fragment or Activity needs to do is add the view to the layout (programmatically, or through inflated XML if you want to be direct). Don’t over-engineer it, since you cannot escape those few lines. Just show/hide/gone the view first, based on the state that must be rendered on the screen, and then optimise if necessary. We’re building an app for Android OS 6+, so those devices are quite fast. We have not encountered any glitches. A sketch of this wiring follows below.
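
    A minimal sketch of that wiring, assuming a DI framework (Dagger or anything javax.inject-compatible) that can provide View implementations; the fragment, layout and id names are illustrative, and the actual injection call depends on your setup:

      import android.os.Bundle
      import android.view.View
      import android.widget.FrameLayout
      import androidx.fragment.app.Fragment
      import javax.inject.Inject

      class SearchFragment : Fragment(R.layout.fragment_search) {

          // Bound in the DI graph to ResultsListView – or to an A/B test variant,
          // or to a flavour-specific implementation.
          @Inject lateinit var resultsView: View

          override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
              super.onViewCreated(view, savedInstanceState)
              // The fragment doesn't know (or care) what is rendered inside.
              view.findViewById<FrameLayout>(R.id.results_container).addView(resultsView)
          }
      }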

    Oh, and really focus on cleaning up dependency declarations like Dagger modules. If you don’t understand them, spend time on it. Leave a readable and understandable legacy to your colleagues. Piled-up dependencies are the worst enemy of a clean dependency graph; they can lower performance and definitely cause headaches while coding the solution.

    5. Infamous parent-child dependency

    Don’t introduce a parent-child dependency between custom views. They really need to be testable and independent. You don’t want to create or mock a parent view or controller in order to test a child, and you would eventually need to create or mock such instances for UI or unit tests. Don’t introduce hidden dependencies, like the setup of a custom view being partially delegated to the Activity/Fragment or a parent view. And definitely don’t inject your Activity/Fragment into a custom view, which could lead to cyclic dependencies and memory leaks. It would also mean other developers won’t understand how to use your component. That’s not a reusable fashion.

    The same goes for view controllers (Presenters, ViewModels, Models, etc.): don’t introduce a parent-child link between them. Navigation and analytics are the common mistakes – just inject those (hopefully) reusable components into the custom view controller, instead of calling methods on a parent controller to do any job like analytics tracking.

    In summary, what was the effect of aligning to those principles…

    Simply put, reusable components which can be used anywhere in the app with only small changes to the codebase required. By architecting them properly you should be able to decouple from Android lifecycle hell. TBH, we still sometimes need to sync data in onResume() or onPause() somehow. That is why we created an abstract class CustomView with onForeground() and onBackground() methods, which are then implemented by custom views. We have done it for simplicity, since it doesn’t break good architecture rules. We’re still doing our research on this topic. We know some tricks, including lifecycle-aware components, but in the end our goal was to decouple from the lifecycle! I will update this article when and if it’s solved.
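
    A minimal sketch of that abstract CustomView idea – the two method names come from the paragraph above, everything else is illustrative:

      import android.content.Context
      import android.util.AttributeSet
      import android.widget.FrameLayout

      abstract class CustomView @JvmOverloads constructor(
          context: Context,
          attrs: AttributeSet? = null
      ) : FrameLayout(context, attrs) {

          // Called by the host screen, e.g. from its onResume(), when the view
          // should refresh or resume syncing data.
          open fun onForeground() {}

          // Called by the host screen, e.g. from its onPause(), when the view
          // should stop work it doesn't need in the background.
          open fun onBackground() {}
      }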

    So, for example, by creating a list of results that was decoupled completely from other components, we were able to switch views inside a screen in a couple of lines, enabling A/B testing and easy handling of multiple flavours. It is also in the testing stage, so stay tuned. I’m definitely eager to edit this, or write a new article, after long-term usage of the solution.

    The way to achieve that wasn’t easy. We talked to the client and other devs a lot in order to simplify and learn what was really needed. We started refactoring in one place and then threw the code away, and started again, because the system and requirements were quite complex. But that enabled us to understand the business requirements and technical dependencies, which opened a new perspective on the solution. We knew how to simplify.

    Prerequisites

    I assumed your codebase is prepared for custom views, in the sense that many other collaborators are already reusable, testable, independent components – or at least you’re able to inject them where needed. That would mean analytics, navigators, business logic, sensor-handling components and much more. That really depends on your application and you need to resolve it for yourself, hopefully by using the above-mentioned principles as a reference. Still, there are a few basic rules:

    • Your business logic is separated from views and view controllers.
    • Your business logic can be easily injected and reused.
    • View controllers are decoupled from views and how views are rendered.
    • View controllers are framework agnostic.
    • View controllers are easily injectable to views.
    • Views are easily injectable into Fragments/Activities.

    DISCLAIMER

    That is my story behind architecting reusable components for an Android app. I’m not saying it is a silver bullet nor the best practice. Still, I consider the above rules a great approach to organising our system. It was the first iteration we did at Tigerspike as a part of a larger project, so stay tuned for more! Big shout out to my colleagues, as they made this article possible. :)

    PS

    I would love to see your comments, ideas and improvements, as I’m looking to improve and share knowledge as much as possible. If you want to reach out, I’m based in Wrocław, Poland. From time to time I visit London.
    Here is my LinkedIn and Twitter.