We have all had hectic but unproductive days. What can we learn from how computers solve the same problem? At the very least, we can learn to choose when to interrupt ourselves.

Can computer scientists – people who study the principles of computing and programming – help solve human problems, such as having too much to do and not enough time to do it?
That is the proposition of a new book, Algorithms to Live By, by Brian Christian and Tom Griffiths. It is an idea with obvious appeal to any economist: we already tend to treat day-to-day decisions as a branch of applied mathematics, and computer science is applied mathematics too.
To be precise, using computer science is not the same thing as using computers. Computer scientists have spent decades on problems such as organizing information, setting priorities, and connecting networks, and many of the algorithms they developed for computers apply just as well to humans. An algorithm, after all, is not a computer program but a structured, step-by-step procedure – a recipe, in effect. (The word “algorithm” derives from the name of the ninth-century Persian mathematician Al-Khwārizmī, though algorithms themselves predate his work by thousands of years.)
So, what is the best recipe for working through a to-do list? Perhaps simpler than you think: do the things on the list in any order you like, because the moment at which the last task is finished will be the same regardless. This advice sounds flippant, and it seems to imply that computer science has nothing to teach us when we have too much to do and feel stressed and scattered.
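Here is a minimal sketch of that claim in Python (the task durations are invented for illustration): for one person doing one task at a time, with no cost to switching, the finish time of the final task is identical under every possible ordering.

```python
import itertools

# Hypothetical task durations in minutes (illustrative numbers only).
tasks = [30, 5, 60, 15, 45]

# For a single worker doing one task at a time, with no switching cost,
# the moment the last task finishes (the "makespan") is just the sum of
# the durations -- identical for every one of the 120 possible orderings.
finish_times = {sum(order) for order in itertools.permutations(tasks)}
print(finish_times)  # {155}: one value, whatever the order
```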

Or so I used to think. Then I read a paper published in 1970 by the computer scientist Peter Denning, describing a problem computers can run into when multitasking. Most computers cannot truly multitask; instead, like humans, they switch rapidly from one thing to another. The computer flicks between tasks such as updating the Pokémon game on your screen, downloading a video from the web, checking whether you have tapped the keyboard or moved the mouse, and many other processes. But even a computer cannot juggle an unlimited number of tasks at once, and beyond a certain point, disaster strikes.
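A rough sketch of that rapid switching, with made-up process names: a round-robin loop gives each process a brief slice of attention in turn, which from the outside looks like doing everything at once.

```python
from collections import deque

# Toy round-robin scheduler; the process names are invented for illustration.
processes = deque(["redraw screen", "download video", "poll keyboard and mouse"])

for tick in range(6):                # simulate six time slices
    current = processes.popleft()    # give the front process a turn
    print(f"slice {tick}: running {current!r}")
    processes.append(current)        # send it to the back of the queue
```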
The problem stems from the use of fast, readily accessible memory – “caches” – to store data. Imagine a pianist with two or three pages of music open on the stand in front of her. That score sits in her fastest cache. Behind it are the pages she will need in a moment. Then come larger but slower caches: the scores on the piano stool, more in the attic, and still more at the music shop. There is a trade-off between how much can be stored and how quickly it can be read.
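A toy model of that hierarchy, with invented levels and access times, shows the trade-off: the deeper the data lives, the longer the fetch takes.

```python
# Toy memory hierarchy echoing the pianist analogy; the contents and
# access times are illustrative assumptions, not real hardware figures.
hierarchy = [
    ("music stand", {"page 1", "page 2"}, 1),        # fastest, smallest
    ("piano stool", {"page 3", "sonata"}, 10),
    ("attic",       {"old etudes"},       1_000),
    ("music shop",  {"everything else"},  100_000),  # slowest, largest
]

def lookup(item):
    """Search each level in turn; the deeper the item lives, the higher the cost."""
    cost = 0
    for name, contents, access_time in hierarchy:
        cost += access_time
        if item in contents:
            return name, cost
    return None, cost

print(lookup("page 2"))      # ('music stand', 1)    -- instant
print(lookup("sonata"))      # ('piano stool', 11)   -- a short pause
print(lookup("old etudes"))  # ('attic', 1011)       -- a long trip
```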

If the pianist plays one complete piece at a time, this arrangement causes no trouble. But if she is asked to change pieces every minute or so, she must spend some of her time retrieving scores from the piano stool. And if she had to change pieces every few seconds, she could not play at all: every moment would go on shuffling scores between the music stand and the stool.
The same is true of a computer’s caches. There is a hierarchy, from the microprocessor’s own ultra-fast memory down to the hard drive (slow) and off-site backup (very slow). To work quickly, the computer must copy the data needed for the current task into a fast cache. If tasks switch too frequently, the machine spends all its time copying one task’s data into the cache, switching tasks, flushing the cache, and loading in the next task’s data. In the limit, nothing gets done. Denning described this sorry state as “thrashing.”
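A back-of-the-envelope sketch of thrashing, assuming a fixed cost in minutes for each task switch: as the switching rate climbs, useful work falls to zero.

```python
SWITCH_COST = 2   # assumed minutes lost reloading the "cache" per switch
WORKDAY = 8 * 60  # minutes in a working day

def useful_minutes(switches_per_hour: int) -> int:
    """Minutes of real work left after paying for every task switch."""
    overhead = switches_per_hour * 8 * SWITCH_COST
    return max(WORKDAY - overhead, 0)

for rate in (1, 5, 15, 30):
    print(f"{rate:>2} switches/hour -> {useful_minutes(rate):>3} useful minutes")
# At 30 switches an hour, the entire day goes on switching: thrashing.
```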

We have all had days like that, when we got nothing done because we did little but lurch from one task to another. Can we learn from the computer’s solution? The most straightforward fix is a bigger cache; unfortunately, that is easier to arrange for computers than for humans.

The obvious alternative is to switch tasks less often. Computers use a technique called “interrupt coalescing”: instead of dealing with each small task the moment it arises, they group small tasks together and handle them as a batch. A shopping list serves the same purpose, saving many separate trips to the shop; so does letting bills accumulate and paying them all once a month.
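A minimal sketch of interrupt coalescing, with hypothetical jobs: incoming tasks are queued as they arrive and then handled together in one batch – one “trip to the shop.”

```python
from collections import deque

pending = deque()  # small jobs wait here instead of being handled at once

def on_event(job: str) -> None:
    """Called whenever a small task arrives; we only enqueue it."""
    pending.append(job)

def flush() -> None:
    """Handle every queued job in one sitting -- a single batch."""
    while pending:
        print("handling:", pending.popleft())

# Jobs trickle in during the day (names invented for illustration)...
for job in ("reply to email", "pay the gas bill", "buy milk", "buy stamps"):
    on_event(job)

flush()  # ...and are dealt with together, in one batch
```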
Yet we often find it hard to stop ourselves switching from one task to another. Computer science suggests why the choice is so agonizing: there is a trade-off between responding quickly and setting aside the unbroken stretches of time that make us productive. If you want to reply to your boss’s email within five minutes, you must check your email at least every five minutes. If you want to disappear offline for a week to write a novel, your response time slows to a week.
Any solution should recognize this trade-off. Decide on an acceptable response time, then interrupt yourself no more often than that requires. If you think replying to a message within four hours is acceptable (and by most standards it is), you need only check email every four hours, not every four minutes. As Christian and Griffiths suggest: decide how responsive you want to be. If you want to get things done, be no more responsive than that.
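The arithmetic behind that advice is easy to sketch, assuming each email check costs a fixed amount of refocusing time (the figures here are invented):

```python
CHECK_COST = 10   # assumed minutes of focus lost around each email check
WORKDAY = 8 * 60  # minutes in a working day

def focused_minutes(check_interval: int) -> int:
    """Deep-work time left if email is checked every check_interval minutes."""
    checks = WORKDAY // check_interval
    return max(WORKDAY - checks * CHECK_COST, 0)

for interval in (4, 30, 240):
    print(f"check every {interval:>3} min -> worst-case reply {interval} min, "
          f"{focused_minutes(interval)} min of focused work")
# Checking every 4 minutes leaves no time to work at all; checking every
# 4 hours preserves almost the whole day.
```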