Use memoryconservative mode for WF CSV_MERGE
When run as a workflow job (ERT plugin), CSV_MERGE has been observed through
logging to fail due to lack of memory.

When run as a workflow job, there is likely one CSV to load for every
realization, so it probably makes sense to have the memory-conservative
option on by default.

The faster option is kept as the default for usage as a forward model step.
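For context, the difference between the two merge strategies can be sketched roughly as below. This is an illustrative sketch only, assuming pandas-based merging; the helper name merge_csv_files and the exact calls are hypothetical, not the actual csv_merge_main implementation.

import pandas as pd

def merge_csv_files(csvfiles, output, memoryconservative=False):
    # Illustrative sketch only, not the actual subscript implementation.
    if memoryconservative:
        # Read and append one CSV at a time, so only one incoming frame
        # (plus the growing result) is held in memory at any point.
        merged = None
        for fname in csvfiles:
            frame = pd.read_csv(fname)
            merged = (
                frame
                if merged is None
                else pd.concat([merged, frame], ignore_index=True)
            )
    else:
        # Faster: read every CSV first and concatenate once, at the cost
        # of holding all frames in memory simultaneously.
        merged = pd.concat(
            [pd.read_csv(fname) for fname in csvfiles], ignore_index=True
        )
    merged.to_csv(output, index=False)

With one CSV per realization across many realizations, the one-at-a-time variant trades speed for a much smaller peak memory footprint, which matches the reasoning above.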
berland committed Dec 16, 2024
1 parent 378bb74 commit 42daa84
Showing 1 changed file with 3 additions and 1 deletion.
4 changes: 3 additions & 1 deletion src/subscript/csv_merge/csv_merge.py
@@ -68,7 +68,9 @@ def run(self, *args):
         args = parser.parse_args(args)
         logger.setLevel(logging.INFO)
         globbedfiles = glob_patterns(args.csvfiles)
-        csv_merge_main(csvfiles=globbedfiles, output=args.output)
+        csv_merge_main(
+            csvfiles=globbedfiles, output=args.output, memoryconservative=True
+        )
 
 
 def get_parser() -> argparse.ArgumentParser:
