Thinking about the process of log management.
In the old days we invested energy during coding to pre-determine the log level of each message: which are CRITICAL, which are INFO, and which are DEBUG. Over time the convention shifted toward investing less effort at the coding stage. For example, mark everything as INFO, then monitor the log files in production and, based on how frequently each message appears, start downgrading the noisy ones. Eventually the usage of the application itself determines the log levels. A minimal sketch of that frequency-driven downgrading follows below.
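Here is a rough sketch of the idea, assuming a plain text log file and purely illustrative thresholds; the digit-stripping normalization and the cutoff values are my assumptions, not a prescription.

```python
import re
from collections import Counter

def suggest_levels(log_path, debug_threshold=1000, info_threshold=50):
    """Tally message templates in a log file and suggest a level for each:
    very frequent templates sink to DEBUG, moderately frequent ones stay
    INFO, rare ones are flagged for a closer look. Thresholds are arbitrary."""
    counts = Counter()
    with open(log_path) as fh:
        for line in fh:
            # Crude normalization: strip digits so "user 42 logged in"
            # and "user 7 logged in" count as the same template.
            template = re.sub(r"\d+", "<n>", line.strip())
            counts[template] += 1

    suggestions = {}
    for template, n in counts.items():
        if n >= debug_threshold:
            suggestions[template] = "DEBUG"    # very frequent: low-level detail
        elif n >= info_threshold:
            suggestions[template] = "INFO"     # routine activity
        else:
            suggestions[template] = "WARNING"  # rare: worth human attention
    return suggestions
```

Run periodically against production logs, a tally like this lets actual usage, rather than up-front guessing, decide where each message sits in the hierarchy.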
A similar thing happened with SQL queries: by monitoring the slow-query log, the engineering team's attention was drawn to exactly which queries needed optimizing.
These more agile approaches are self-organizing, multi-scale systems. Frequent log messages communicate low-level activity, the bits and bytes, while infrequent messages communicate higher-level, application-level concepts. Separating the levels is crucial for clear communication.
This is, in essence, a learning process, the same kind adopted in machine learning.
However, the danger is that the data science world is flattening these levels, discarding the inherent scale-dependent information. This happens when data scientists construct a monolithic model for solving a problem: a single classifier applied to multi-scale data. The analogy would be removing the hierarchy of log messages (no more CRITICAL, INFO, DEBUG) and replacing it with a single judgmental statement: these log messages are 'good', those are 'bad'.
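A toy illustration of the flattening, with invented messages and labels purely for the sake of the analogy:

```python
# Multi-level view: each message carries its scale.
levels = {
    "disk failure on /dev/sda": "CRITICAL",
    "user login succeeded": "INFO",
    "cache miss for key <n>": "DEBUG",
}

# Monolithic view: three scales collapsed into one binary judgment.
flat = {msg: ("bad" if lvl == "CRITICAL" else "good")
        for msg, lvl in levels.items()}

print(flat)
# {'disk failure on /dev/sda': 'bad',
#  'user login succeeded': 'good',
#  'cache miss for key <n>': 'good'}
# The INFO/DEBUG distinction, i.e. the scale information, is gone.
```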
--
What is an example of a scale-dependent value system in software/system management?