Talking About Architecture, or Design
Recently, I came across a statement that resonated with me.
Complexity doesn’t disappear; it only shifts.
Some people harbor the illusion that, through “technological” means, certain functionality can be simplified and certain risks made “manageable.” I call this an illusion because, when the business logic itself isn’t the problem, these impressive-looking operations mostly just sidestep the details one ought to face over the course of the project.
These seemingly impressive people are simply taking the lazy way out.
Why Microservices Are Good
Microservices feel right because they fundamentally acknowledge one thing: the developers of an entire service environment need to care about each node’s interaction logic, load design, and overall behavior, and need to actively share the responsibilities of the operations department. For example:
- Taking responsibility for API availability by introducing rate limiting and circuit breaking (see the sketch after this list)
- Taking responsibility for the overall network egress and for how the business is organized: gateways and registry centers
- Taking responsibility for continuous monitoring and governance: the various monitors
- etc.
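To make the first item concrete, here is a minimal sketch of rate limiting plus circuit breaking as HTTP middleware. It uses only the Go standard library (plus the built-in min from Go 1.21); every name and threshold here is a hypothetical illustration, not any particular framework’s API.

```go
package main

import (
	"net/http"
	"sync"
	"time"
)

// tokenBucket refills `rate` tokens per second, up to `burst`.
type tokenBucket struct {
	mu     sync.Mutex
	tokens float64
	burst  float64
	rate   float64
	last   time.Time
}

func (b *tokenBucket) allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	now := time.Now()
	b.tokens = min(b.burst, b.tokens+b.rate*now.Sub(b.last).Seconds())
	b.last = now
	if b.tokens < 1 {
		return false
	}
	b.tokens--
	return true
}

// breaker trips after maxFails consecutive failures and stays open for cooldown.
type breaker struct {
	mu       sync.Mutex
	fails    int
	maxFails int
	openedAt time.Time
	cooldown time.Duration
}

func (c *breaker) allow() bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.fails < c.maxFails || time.Since(c.openedAt) >= c.cooldown
}

func (c *breaker) record(ok bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if ok {
		c.fails = 0
		return
	}
	c.fails++
	c.openedAt = time.Now()
}

// protect wraps a handler with both guards.
func protect(next http.HandlerFunc, rl *tokenBucket, cb *breaker) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		if !rl.allow() {
			http.Error(w, "rate limited", http.StatusTooManyRequests)
			return
		}
		if !cb.allow() {
			http.Error(w, "circuit open", http.StatusServiceUnavailable)
			return
		}
		next(w, r)
		cb.record(true) // a real breaker would inspect the downstream result
	}
}

func main() {
	rl := &tokenBucket{tokens: 200, burst: 200, rate: 100, last: time.Now()}
	cb := &breaker{maxFails: 5, cooldown: 10 * time.Second}
	http.HandleFunc("/api", protect(func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	}, rl, cb))
	http.ListenAndServe(":8080", nil)
}
```

The point is not the exact algorithm but who owns it: this code lives with the service, written by the same developers who own the node’s load behavior.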
Indeed, some of this falls within the realm of traditional operations. But when the complexity can no longer be separated from the development work, there is no point forcing a split. Confronting the problem and finding the most appropriate solution is the right path.
So, to some extent, under a microservices architecture the testing department can no longer be responsible for testing the entire process; test results only vouch for the current test environment. The problems a load test hits with five nodes in the test environment and the problems the same code hits with fifty nodes in production are completely different.
By the way, this is also why I think idiotic statements like “QA is fully responsible for the test environment” aren’t even worth debating. If the complexity lives with the development department, QA doesn’t need to answer for that complexity, only for the delivery environment and its feasibility.
So What Is Architecture?
Architecture, as I understand it, essentially deals with the intertwining of complexity and variability. We need to separate the invariant modules from the variant ones for iteration, while making sure the complexity doesn’t get deformed (that is, placed somewhere unreasonable).
For example, in a trading module, the order settlement flow is the invariant part, while discount rules, and the myriad business attached to discounts, are both highly complex and highly variable. Putting the discount calculation logic inside the settlement module is a rather foolish move, because the once-stable order process now has to be iterated on constantly thanks to the discount logic dragged into it.
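A minimal sketch of that separation, with hypothetical names: the settlement flow depends only on a stable DiscountPolicy interface, so discount rules can churn freely without the settlement code ever changing.

```go
package trade

// Order is the minimal input to settlement; amounts are in cents.
type Order struct {
	Subtotal int64
}

// DiscountPolicy is the only thing settlement knows about discounts.
// New promotions implement it; settlement never changes.
type DiscountPolicy interface {
	// Discount returns the amount to subtract, already fully computed.
	Discount(o Order) int64
}

// Settle is the invariant part: it contains no discount rules at all.
func Settle(o Order, p DiscountPolicy) int64 {
	total := o.Subtotal - p.Discount(o)
	if total < 0 {
		total = 0
	}
	return total
}
```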
So, why are Lambdas (referring specifically to AWS Lambda) good? Because, to some extent, they create space for developers to think about complexity and variability and to practice splitting, while hiding the complexity that was shifted away (it comes back as a service fee).
So, functional programming plus enforced modularity: isn’t that wonderful? There’s nothing better.
Of course, a cleverly written Lambda can still turn into a mess, but that’s not the point here.
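For concreteness, a minimal sketch of what that enforced splitting looks like. The runtime wiring (lambda.Start from the aws-lambda-go library) is real; the event shape and the business logic are hypothetical.

```go
package main

// Minimal sketch of a single-purpose Lambda: one function, one concern.

import (
	"context"

	"github.com/aws/aws-lambda-go/lambda"
)

type PriceRequest struct {
	Subtotal int64 `json:"subtotal"`
}

type PriceResponse struct {
	Total int64 `json:"total"`
}

// handler holds exactly one variable concern (here, a hypothetical discount
// rule); scaling, routing, and retries are the platform's hidden complexity.
func handler(ctx context.Context, req PriceRequest) (PriceResponse, error) {
	total := req.Subtotal - req.Subtotal/10 // flat 10% off, for illustration
	return PriceResponse{Total: total}, nil
}

func main() {
	lambda.Start(handler)
}
```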
At the same time, what to hide and what to expose is an empirical science. Essentially, design patterns study how to expose variability and hide complexity. That does not mean transferring the complexity: design patterns tell you how to turn complex logic into invariant logic and expose the variable logic in the form of interfaces. A well-chosen interface boundary makes subsequent iterations more stable, because they never touch the higher-complexity parts of the code.
Returning to the design of the trading module: ideally, the coupon module should expose a perennially invariant API to settlement, returning fully computed results.
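Continuing the trading-module sketch above, a hypothetical new coupon rule then plugs in as just another DiscountPolicy; Settle is never touched.

```go
// HolidayCoupon is a hypothetical new rule: 20% off orders above 100.00.
// It arrives as a new DiscountPolicy implementation; Settle stays frozen.
type HolidayCoupon struct{}

func (HolidayCoupon) Discount(o Order) int64 {
	if o.Subtotal > 100_00 {
		return o.Subtotal / 5
	}
	return 0
}
```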
So, what’s the deal with low-code platforms?
Rant
Following the argument above, if someone did the exact opposite, exposing high-complexity logic while freezing the low-complexity parts, it should in theory be intolerable to anyone. And yet people have done exactly that, letting users write code in a web page, simply because they cannot, or are too lazy to, abstract the process and split out the complexity.
This is exactly what I described at the start: pretending, with fancy rhetoric, to eliminate complexity through technological means, while evading a programmer’s most basic responsibility: abstraction, assigning human-facing responsibilities to the UI and computational and storage responsibilities to CPUs and disks. They simply do what they like and push away what they don’t.
Low-code isn’t new. Think of formulas in Excel; Excel even integrates scripting of a sort. But forgive my limited imagination: I really can’t figure out what’s worth boasting about in putting a textarea on a web page so users can write their own code. Shall we gut Excel one day and wire it straight into Access? Would that be a great invention too?
So Why Do I Hate Triggers, Stored Procedures, or Materialized Views?
There will always be some logic that triggers seem made for; say, I need some lightweight summarization or a small transaction. But fundamentally, this kind of operation places variable business complexity on the database, which should not be that variable.
The database provides triggers for various processing stages, and I don’t oppose using such features for small data patches. But when facing whole, process-level projects, blindly shoving business logic into triggers is, in my opinion, irresponsible.
Just look at the ETL process: what matters is when each job fires, the order in which the jobs fire, and the calculation logic inside each job. These are complex and variable. Bind the entire process to a chain of DB triggers or materialized views and you will inevitably run into scheduling that no longer meets requirements, and lose control over endless iterations.
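A minimal sketch of the alternative I am arguing for, with hypothetical job names: the firing order and the per-job logic live in ordinary code, where they can be read, tested, and reordered, rather than being implied by a trigger or materialized-view chain.

```go
package etl

import (
	"context"
	"fmt"
)

// Job is one ETL step; an explicit sequence of these is the reviewable
// counterpart of an implicit trigger or materialized-view chain.
type Job struct {
	Name string
	Run  func(ctx context.Context) error
}

// RunPipeline executes jobs in order and stops at the first failure,
// so the scheduling logic is visible in one place.
func RunPipeline(ctx context.Context, jobs []Job) error {
	for _, j := range jobs {
		if err := j.Run(ctx); err != nil {
			return fmt.Errorf("job %s: %w", j.Name, err)
		}
	}
	return nil
}

// Hypothetical nightly pipeline: the order is plain data, not a side
// effect of DDL scattered across the database.
var nightly = []Job{
	{Name: "extract_orders", Run: func(ctx context.Context) error { return nil /* pull from OLTP */ }},
	{Name: "aggregate_daily", Run: func(ctx context.Context) error { return nil /* summarize */ }},
	{Name: "load_warehouse", Run: func(ctx context.Context) error { return nil /* write out */ }},
}
```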
This conclusion is not my speculation. As early as around 2015, I knew of many companies using MySQL stored procedures; in later iterations, whether because of sharding or business evolution, they went through the painful process of stripping the stored-procedure logic out and reimplementing it in plain code, those procedures having been introduced back when practices were less rigorous.
Yet after I moved into data development, I found ClickHouse users stringing materialized views together indiscriminately and proudly calling it a materialized view chain. I don’t understand what there is to be smug about. Whether you view it as streaming or as batch, I don’t think it’s a good solution.
Of course, I may be wrong about this part. I just see the similarities with history when I look at our company’s architecture diagrams, with various views strung together into an ETL process. Just venting. Maybe this turns out to be the best solution in the end? Heh heh.
