Rework Navigation

Rework navigation logic to a more centralized approach.
We are already working on it.

What is it

  • All navigation logic should be removed from views/controllers.
  • The navigation coordinator contains all the logic of WHEN to move to WHICH view.
  • Views/controllers hold a reference to the navigation coordinator.
  • Views/controllers push state changes to the navigation coordinator.
  • When views/controllers take actions, they notify the navigation coordinator of that action.
  • Based on this action (and the current state), the navigation coordinator chooses which view to push onto the stack.
These rules are not set in stone, of course; pragmatism and good judgment take precedence. A rough sketch of the idea follows below.
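
To make this concrete, here is a minimal sketch of what the coordinator contract could look like on iOS. All names (DreamFlowAction, DreamFlowCoordinator, DreamDetailViewController) are hypothetical; the sketch only illustrates the direction of the dependencies: views report actions, the coordinator decides what gets pushed.

```swift
import UIKit

// A minimal sketch only. DreamFlowAction, DreamFlowCoordinator and
// DreamDetailViewController are hypothetical names.
enum DreamFlowAction {
    case dreamSelected(id: String)
    case dreamPublished
    case flowCancelled
}

protocol FlowCoordinator: AnyObject {
    // Views/controllers report WHAT happened; the coordinator decides WHERE to go.
    func handle(_ action: DreamFlowAction)
}

final class DreamFlowCoordinator: FlowCoordinator {
    private let navigationController: UINavigationController

    init(navigationController: UINavigationController) {
        self.navigationController = navigationController
    }

    func handle(_ action: DreamFlowAction) {
        switch action {
        case .dreamSelected(let id):
            // The WHEN/WHICH decision lives here, not in the view.
            let detail = DreamDetailViewController(dreamID: id, coordinator: self)
            navigationController.pushViewController(detail, animated: true)
        case .dreamPublished:
            navigationController.popToRootViewController(animated: true)
        case .flowCancelled:
            navigationController.dismiss(animated: true)
        }
    }
}

final class DreamDetailViewController: UIViewController {
    private let dreamID: String
    private weak var coordinator: FlowCoordinator?

    init(dreamID: String, coordinator: FlowCoordinator) {
        self.dreamID = dreamID
        self.coordinator = coordinator
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) { fatalError("not supported in this sketch") }

    // The view only reports the action; it no longer pushes the next screen itself.
    @objc private func publishTapped() {
        coordinator?.handle(.dreamPublished)
    }
}
```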

Plan of attack

For each flow do the following:

  1. Assume a flow passing views A -> B -> C -> D -> E
  2. Split the flow into three parts.
    • The start of the flow (A -> B)
    • The view you want to extract (C)
    • The end of the flow (D -> E)
  3. Wrap the start and end flows in a LegacyFlowWrapper ([A -> B] and [D -> E]); see the sketch after this list.
  4. Create a Navigation Coordinator
  5. At the end of the first LegacyFlowWrapper, push state and actions to the navigation coordinator instead of navigating to view C directly: [A -> B] -> Coordinator
  6. Implement the logic of when to change to view C: Coordinator -> C
  7. Implement the logic of when to change to the second LegacyFlowWrapper: Coordinator -> [D -> E]
  8. In view C, change the logic to push state and actions to the navigation coordinator instead of pushing the next view directly.
  9. Now we have extracted view C from the flow.
    • [A -> B] -> Coordinator
    • Coordinator -> C
    • C -> Coordinator
    • Coordinator -> [D -> E]
  10. Repeat the same steps for every other view in the flow, until there are no more wrappers and all navigation flows through the coordinator:
    • A -> Coordinator
    • E -> Coordinator
    • Coordinator -> A
    • Coordinator -> E
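
A rough sketch of the LegacyFlowWrapper idea, building on the coordinator sketch above. LegacyFlowWrapper and its completion hook are hypothetical names; the point is that the legacy sub-flow stays untouched except for its final navigation call, which now reports to the coordinator.

```swift
import UIKit

// Builds on the FlowCoordinator/DreamFlowAction sketch above.
final class LegacyFlowWrapper {
    // Wraps an untouched legacy sub-flow (e.g. [A -> B]). The only change inside
    // the legacy code is that its final "navigate to the next screen" call is
    // replaced by the completion hook below.
    private let entryViewController: UIViewController
    private let coordinator: FlowCoordinator

    init(entryViewController: UIViewController, coordinator: FlowCoordinator) {
        self.entryViewController = entryViewController
        self.coordinator = coordinator
    }

    func start(on navigationController: UINavigationController) {
        navigationController.pushViewController(entryViewController, animated: true)
    }

    // Called by the last legacy screen (B) instead of pushing C directly:
    // [A -> B] -> Coordinator.
    func legacyFlowDidFinish(with action: DreamFlowAction) {
        coordinator.handle(action)
    }
}
```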

Benefits

  • Reordering screens will be much simpler in the future.
  • Adding additional screens/flows is easier and faster.
  • Having an overview of which flows exist is simpler.
  • Streamlining the app and its flows becomes easier to manage.
  • Controller and view logic will be shorter, more to the point, and simpler to understand and change.

Costs

Each node in the app needs to have its navigation logic reworked (0.5-1 day/node max). There are at most 30 of these nodes.

Rework Database

Rework the front-end database into ephemeral, JSON-like storage. Keep the app sync-agnostic.

The goal of the local storage is to let the app work offline. However, we are currently doing all data communication through the database's syncing mechanism. This shouldn't be the case.
First establish which functionality needs to be available offline. When offline, should we be able to:

  • Like other people's dreams
  • Create new public dreams
  • Link bank accounts to dreams

All actions taken while offline can cause conflicts when syncing back to the server, so we should minimize the number of offline features as much as possible.
I also noticed that when the app starts we check whether there are pending migrations. If there are, we simply delete the entire database and rebuild it. This again risks losing data.
After some further analysis it seems that at the moment we can't create private dreams while offline. It makes sense that we don't allow this, but it does indicate that we need to challenge the actual reasons why the client wants the app to be available offline.

So there are two options here. Either we establish that the whole 'available offline' idea is broken by design, and we simply don't implement it and remove it from the apps, making everything a lot simpler and cleaner.
Or we establish that the client indeed wants the offline availability, and we discuss which offline features should be kept and explain the difficulties and issues.
I’ll do a cost/benefit analysis of both.

No offline functionality

Plan of attack:

  1. (iOS only.) On iOS there isn't an abstraction around the database in place yet, so we need to implement this first as an extra step; Android already has this abstraction. We want the business logic to be completely agnostic as to where the data is coming from, database or backend. To do this we first create a data manager and then route all sync/fetch calls through it.
    Once our code is data-source agnostic, we can start phasing out the database.
  2. Create a client that talks specifically to the backend. At first, simply create it empty and implement some helper methods to make the proper REST calls (see the sketch after this list).
  3. Take a call somewhere in the code that talks to our data source abstraction and make it talk to the client instead. Implement the calls needed to fetch/update/delete the data on the backend. Parse the response and return the result.
  4. To keep backwards compatibility we need to keep the database in sync with the updates we send to the backend. So in the client (when updating), also implement a call to update the database (and add cleanup TODOs when doing this).
    The reason we want this (we still need to verify whether it is actually necessary) is that we may have an update in one place in the code and a fetch in another; updating via the new path but fetching via the legacy path would yield outdated results.
  5. Step by step all data-fetching/updating will pass through the client, and the database logic should shrink. When it is no longer used we can remove the redundant code and clean it up.
  6. When the database code is removed also remove all syncing logic from the client code.
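
A minimal sketch of steps 1-4, assuming a hypothetical Dream model, endpoint paths, and legacy database protocol. It only illustrates the shape of the abstraction: business logic talks to a protocol, the backend client implements it, and (temporarily) writes through to the legacy database.

```swift
import Foundation

// Hypothetical model and endpoints; the shapes matter, not the names.
struct Dream: Codable {
    let id: String
    let title: String
}

// Business logic only talks to this protocol and doesn't know whether the data
// comes from the legacy database or the backend.
protocol DreamDataSource {
    func fetchDream(id: String, completion: @escaping (Result<Dream, Error>) -> Void)
    func updateDream(_ dream: Dream, completion: @escaping (Result<Void, Error>) -> Void)
}

// Placeholder for the existing database layer, only here to make the sketch compile.
protocol LegacyDreamDatabase {
    func save(_ dream: Dream)
}

final class BackendClient: DreamDataSource {
    private let baseURL: URL
    private let session: URLSession
    private let legacyDatabase: LegacyDreamDatabase? // TODO: remove once the database is gone (step 4)

    init(baseURL: URL, session: URLSession = .shared, legacyDatabase: LegacyDreamDatabase? = nil) {
        self.baseURL = baseURL
        self.session = session
        self.legacyDatabase = legacyDatabase
    }

    func fetchDream(id: String, completion: @escaping (Result<Dream, Error>) -> Void) {
        let url = baseURL.appendingPathComponent("dreams").appendingPathComponent(id)
        session.dataTask(with: url) { data, _, error in
            if let error = error { return completion(.failure(error)) }
            do {
                completion(.success(try JSONDecoder().decode(Dream.self, from: data ?? Data())))
            } catch {
                completion(.failure(error))
            }
        }.resume()
    }

    func updateDream(_ dream: Dream, completion: @escaping (Result<Void, Error>) -> Void) {
        var request = URLRequest(url: baseURL.appendingPathComponent("dreams").appendingPathComponent(dream.id))
        request.httpMethod = "PUT"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.httpBody = try? JSONEncoder().encode(dream)
        session.dataTask(with: request) { _, _, error in
            if let error = error { return completion(.failure(error)) }
            // Transitional write-through so legacy fetches don't see stale data.
            self.legacyDatabase?.save(dream)
            completion(.success(()))
        }.resume()
    }
}
```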

Benefits

  • The app and its data management become a lot more straightforward.
    • We can simply rely on the backend for data retention and we implement a simple HTTP cache to save network calls
    • We can ignore local data when we force-kill/reinstall the app and again rely on the backend as a single source of truth.
  • The chance of bugs caused by data syncing is reduced significantly. A lot of complexity is taken out of the app, which makes implementing new features a whole lot simpler.
  • We can get rid of almost all of the delegates/promises/async code in the app, which is the biggest chunk of technical debt.

Costs

  • Data source abstraction (iOS only) (3 days)
  • Basic client implementation (1.5 days)
  • Implement endpoints in the client (4h/endpoint?), ±100 endpoints (note that I want this 100% unit tested; there can be no excuses here)
  • Refactor existing calls to use the client (1h/call?), ±100 calls
  • Where needed, implement calls to the legacy data source manager (3h/call?), ±100 calls
  • Cleanup of legacy data source code (1 day)
  • Cleanup of TODOs in new code (1 day)

Keep offline functionality

Plan of attack:

First we need to make very clear to the customer that in the general case this makes little sense. Once we have established that, we need to define an exhaustive list of the minimum features we want to make available offline.
Then we need to do a decent case study to see whether there are additional technical issues with each offline feature individually.
When we have all this information, agree on all the features, and it is technically possible to do it well (dealing with race conditions/desyncing/multi-device editing/…), we can start making the change.

  1. Implementation-wise I would still (as an MVP) implement everything as if we were doing option 1 (no offline features), because it will make everything else simpler. So do that first.
  2. Implement a caching service that allows storing object data in simple flat-file storage (unit test this vigorously).
  3. Create an annotation mechanism (it doesn't have to be literal annotations, but something similar) to tag certain functions in the client as cacheable; see the sketch after this list.
    The caching service should take all params passed to the function into account in order to determine which cached object to return.
  4. Push to the cache whenever a cacheable function is called, updating existing records in the cache with new data.
  5. Implement error handling/offline detection so that when the app goes offline we fetch from the cache instead of the remote data source.
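
A minimal sketch of the read-cache from steps 2-5, assuming the BackendClient and Dream types sketched earlier. Swift has no real annotations, so the "cacheable" tagging is shown here as an explicit wrapper; the cache key is built from the function name plus its parameters, as described in step 3.

```swift
import Foundation

// Flat-file cache keyed by function name + parameters. ResponseCache and
// fetchDreamCached are hypothetical names.
final class ResponseCache {
    private let directory: URL

    init(directory: URL) {
        self.directory = directory
        try? FileManager.default.createDirectory(at: directory, withIntermediateDirectories: true)
    }

    // The key is derived from the function name plus all of its parameters so
    // different calls don't overwrite each other's results.
    private func fileURL(function: String, params: [String]) -> URL {
        let raw = ([function] + params).joined(separator: "_")
        let key = raw.addingPercentEncoding(withAllowedCharacters: .alphanumerics) ?? function
        return directory.appendingPathComponent(key).appendingPathExtension("json")
    }

    func store<T: Encodable>(_ value: T, function: String, params: [String]) {
        try? JSONEncoder().encode(value).write(to: fileURL(function: function, params: params))
    }

    func load<T: Decodable>(_ type: T.Type, function: String, params: [String]) -> T? {
        guard let data = try? Data(contentsOf: fileURL(function: function, params: params)) else { return nil }
        return try? JSONDecoder().decode(type, from: data)
    }
}

// A "cacheable" client call: every fresh result is pushed to the cache, and we
// fall back to the cache when the device is offline.
extension BackendClient {
    func fetchDreamCached(id: String, cache: ResponseCache, isOffline: Bool,
                          completion: @escaping (Result<Dream, Error>) -> Void) {
        if isOffline, let cached = cache.load(Dream.self, function: "fetchDream", params: [id]) {
            return completion(.success(cached))
        }
        fetchDream(id: id) { result in
            if case .success(let dream) = result {
                cache.store(dream, function: "fetchDream", params: [id])
            }
            completion(result)
        }
    }
}
```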

At this point the offline read-only mode is finished. If this is enough for the client, it is prudent to stop here; adding offline write functionality brings a whole lot of additional complexity.
If we REALLY, REALLY want offline write functionality, with the ability to sync it back to the backend when we come back online, we can take the following actions.

  1. Create a call queue manager that persists the queued calls to disk (see the sketch after this list).
  2. Create another annotation that marks methods on the client as offline-write methods.
  3. In the client, whenever an update or delete operation is performed, add it to the queue instead of executing it directly.
  4. Create a background process that consumes the queue and pushes these operations to the server (this assumes we don't care about the responses of update and delete calls).
  5. Handle the offline case gracefully so that the queue isn't drained while there is no connection.
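
A minimal sketch of the persisted call queue from steps 1, 3 and 4. All names are hypothetical, and retries, ordering guarantees and conflict handling are deliberately left out.

```swift
import Foundation

struct QueuedOperation: Codable {
    enum Kind: String, Codable { case update, delete }
    let kind: Kind
    let path: String   // e.g. "dreams/42"
    let body: Data?    // encoded payload for updates, nil for deletes
}

final class OfflineWriteQueue {
    private let fileURL: URL
    private var operations: [QueuedOperation]

    init(fileURL: URL) {
        self.fileURL = fileURL
        let data = (try? Data(contentsOf: fileURL)) ?? Data()
        operations = (try? JSONDecoder().decode([QueuedOperation].self, from: data)) ?? []
    }

    // Called by the client instead of executing the write directly.
    func enqueue(_ operation: QueuedOperation) {
        operations.append(operation)
        persist()
    }

    // Called by a background task; stops draining as soon as we are offline (again).
    func drain(isOffline: @escaping () -> Bool,
               send: @escaping (QueuedOperation, @escaping (Bool) -> Void) -> Void) {
        guard !isOffline(), let next = operations.first else { return }
        send(next) { success in
            guard success else { return }   // keep the operation queued for the next attempt
            self.operations.removeFirst()
            self.persist()
            self.drain(isOffline: isOffline, send: send)
        }
    }

    private func persist() {
        try? JSONEncoder().encode(operations).write(to: fileURL)
    }
}
```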

Benefits

  • Most of the benefits are the same as when we don't support offline functionality. However, there are some additional drawbacks:
    • There will always be some edge cases that never work completely issue-free; getting full offline functionality working perfectly is incredibly difficult.
    • There is a bunch of added complexity in the code that needs to be taken into account

Costs

Compared to the initial effort to make the data source manageable, adding these features isn't that much work, because it is new code that we can build cleanly from the start, without having to drag a bunch of technical debt along. All together, though, I still think it would take about two weeks, taking all the potential issues into account.

Rework general architecture “MVVM” -> …

At the moment we say that we work in an MVVM architecture. This may be the case in name, but it is not really the case in practice.
Our views and models are overloaded with a lot of functionality that shouldn't be in them, resulting in very large classes and unreadable/unmaintainable code.
MVVM is often misused. Especially for an app as small as DIDID, we shouldn't need MVVM.
MVVM is most useful for high-fidelity UI apps that require extreme responsiveness; the two-way binding helps keep data and its representation in sync.
However, DIDID doesn't need this, and we aren't using the two-way binding anyway.

Remove Async/Delegates

Most of the promises should already be handled when we rework the local data storage. However, there are other locations where redundant delegates are used. In general these are mostly code cleanups rather than architectural changes, but we should still make time to clean them up as well (see the sketch below). It is possible that there are underlying architectural flaws that have these delegates as a symptom, but it is not yet clear why, or where the root cause lies.
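
As an illustration of the kind of cleanup meant here, a one-method delegate that only hands back a result can usually be replaced by a completion closure. The types below are hypothetical.

```swift
// Before: a dedicated delegate protocol just to hand back a single value.
protocol DreamPickerDelegate: AnyObject {
    func dreamPicker(_ picker: DreamPicker, didPick dreamID: String)
}

final class DreamPicker {
    weak var delegate: DreamPickerDelegate?

    func pick(dreamID: String) {
        delegate?.dreamPicker(self, didPick: dreamID)
    }
}

// After: the caller just passes a closure; no extra protocol or weak wiring needed.
final class SimplerDreamPicker {
    var onPick: ((String) -> Void)?

    func pick(dreamID: String) {
        onPick?(dreamID)
    }
}
```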

Unittest coverage

During the refactors mentioned above we should always keep in mind that extracting code to a new service/module/class includes unit testing it completely. I wouldn't develop a separate action plan to tackle this. Instead I would enforce a minimum code-coverage percentage during our CI build: we take the current test coverage and force the build to fail when coverage is lower than in the build before. This way, whenever new code is written without unit tests, the overall coverage percentage drops and the build fails, forcing us to write tests for new code. When code that was tested gets removed, the average coverage also drops, so new tests need to be written for existing code. Every now and then I would bump the threshold to whatever the latest coverage percentage is, so we keep improving (a minimal sketch of the ratchet check follows below).
When we have a certain level of coverage and are reasonably happy with the overall architecture, we can make a focused effort to improve coverage further. But we are a long way off.
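
As a sketch of the ratchet, assuming the CI pipeline writes the current line-coverage percentage to coverage.txt and the last accepted threshold to coverage-threshold.txt (both file names and the tolerance are placeholders), the check itself could be as small as:

```swift
import Foundation

// Reads the current coverage and the last accepted threshold, and fails the
// build when coverage drops. File names and tolerance are placeholders.
func readPercentage(fromFile path: String) -> Double? {
    guard let text = try? String(contentsOfFile: path, encoding: .utf8) else { return nil }
    return Double(text.trimmingCharacters(in: .whitespacesAndNewlines))
}

guard let current = readPercentage(fromFile: "coverage.txt"),
      let threshold = readPercentage(fromFile: "coverage-threshold.txt") else {
    print("Missing coverage numbers; failing the build to be safe.")
    exit(1)
}

if current + 0.1 < threshold {   // small tolerance for measurement noise
    print("Coverage dropped from \(threshold)% to \(current)% - add tests before merging.")
    exit(1)
}

print("Coverage OK: \(current)% (threshold \(threshold)%).")
```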