Implement reading of files in parallel chunks (most universally via future; e.g. future.apply::future_lapply()). If there are hundreds or thousands of files, reading could be distributed over multiple cores to speed up the process. Key points (let's discuss):
- Dispatching a single file per core at a time incurs too much parallelization overhead relative to the time it takes to read one file, so files should be processed in chunks.
- Chunking could be done by splitting the list of files into groups according to the number of cores registered by the user.
- Each chunk of files is read separately and the results are recombined at the end.
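A minimal sketch of the chunked approach, assuming `future.apply` is available; `read_files_parallel` is a hypothetical name, and `utils::read.csv` stands in for whatever per-file reader is actually used:

```r
library(future.apply)

read_files_parallel <- function(paths, workers = future::availableCores()) {
  # Register a multisession plan with the user-specified number of cores,
  # restoring the sequential plan on exit.
  future::plan(future::multisession, workers = workers)
  on.exit(future::plan(future::sequential), add = TRUE)

  # Split the file list into one contiguous chunk per worker, so that each
  # parallel task reads many files and the per-task dispatch overhead is
  # amortized over the whole chunk.
  chunk_size <- ceiling(length(paths) / workers)
  chunks <- split(paths, ceiling(seq_along(paths) / chunk_size))

  # Each task reads its chunk of files sequentially ...
  parts <- future.apply::future_lapply(chunks, function(chunk) {
    lapply(chunk, utils::read.csv)  # placeholder reader; swap in as needed
  })

  # ... and the per-chunk results are flattened back into one list at the end.
  unlist(parts, recursive = FALSE)
}
```

Contiguous chunking keeps the recombined results in the same order as the input `paths`; if file sizes vary a lot, smaller chunks (more chunks than workers) would let `future_lapply` balance the load better at the cost of a little extra dispatch overhead.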