Look into optimizations for indexed list call for paperwork
Before the recent loading improvements that take better advantage of our system resources, loading a page of results for the list view could take upwards of 10s-15s. To help remedy this, I switched to parallel loading and reduced the max page size to 25, which has brought cache-miss calls down to ~3s-4s. The biggest remaining issue seems to be the cost of looking up each user associated with the call, since we need to convert each Drupal ID to the real user before outputting the results. Once that lookup is cached, the call only takes 20-30ms to respond, so repeat calls shouldn't make the system feel slow in that context.
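
As a rough illustration (not the actual service code), the parallel loading plus user cache amounts to something like the sketch below; the endpoint path, `DRUPAL_BASE`, `resolveUser`, and the `User` shape are all placeholders I'm assuming for the example.

```typescript
// Hypothetical sketch of cached, parallel user resolution.
// The endpoint, base URL, and User shape are assumptions, not the real code.
type User = { drupalId: string; name: string };

const DRUPAL_BASE = "https://example.org"; // placeholder base URL
const userCache = new Map<string, User>();

async function resolveUser(drupalId: string): Promise<User> {
  const cached = userCache.get(drupalId);
  if (cached) return cached; // repeat calls hit the cache (the ~20-30ms path)

  // Cache miss: resolve the Drupal ID to the real user record.
  const res = await fetch(`${DRUPAL_BASE}/user/${drupalId}?_format=json`);
  const user = (await res.json()) as User;
  userCache.set(drupalId, user);
  return user;
}

// Resolve every user referenced by a page of 25 results in parallel
// instead of one at a time.
async function resolveUsersForPage(drupalIds: string[]): Promise<User[]> {
  return Promise.all(drupalIds.map(resolveUser));
}
```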
To confirm that the remaining responsiveness issue is purely the user lookups, I'll run some tests locally to time the different requests and the individual parts of the initial data fetch and figure out where we can still improve. I'll post the results as a comment on this issue later today.
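
For the timing itself, I'm thinking of something along these lines; the stage names and the `fetchPage`/`resolveUsersForPage` stubs are placeholders rather than the service's real functions.

```typescript
// Rough sketch of per-stage timing to collect locally.
// fetchPage and resolveUsersForPage are stand-ins for the real calls.
declare function fetchPage(page: number): Promise<{ drupalIds: string[] }>;
declare function resolveUsersForPage(ids: string[]): Promise<unknown[]>;

async function timeStage<T>(label: string, work: () => Promise<T>): Promise<T> {
  const start = performance.now();
  try {
    return await work();
  } finally {
    console.log(`${label}: ${(performance.now() - start).toFixed(1)}ms`);
  }
}

async function profilePage(pageNumber: number): Promise<void> {
  const page = await timeStage("fetch page", () => fetchPage(pageNumber));
  await timeStage("resolve users", () => resolveUsersForPage(page.drupalIds));
}
```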
There are two ways we could look into addressing this: one where I'm not sure how much we'd actually gain, and a second that would fix the problem but might be overkill. The first is to dig into how the parallel loading works under the hood and try to give it more resources. Even though I've greatly reduced the impact of loading users, I think there's still some performance left on the table that we can extract if I poke around a bit more.
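
If the loader's concurrency turns out to be the knob, a hypothetical way to expose it would be a configurable limit on how many lookups run at once, so we can compare timings at different limits. The `resolveUser` call and the limits here are assumptions about how the loader is wired, not its actual internals.

```typescript
// Illustration of "giving the parallel loading more resources":
// run at most `limit` lookups at once, with the limit configurable.
async function mapWithConcurrency<T, R>(
  items: T[],
  limit: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;

  // Start `limit` workers that each keep pulling the next unclaimed index.
  const workers = Array.from({ length: Math.min(limit, items.length) }, async () => {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  });

  await Promise.all(workers);
  return results;
}

// e.g. bump the limit from 4 to 8 and compare timings:
// await mapWithConcurrency(drupalIds, 8, resolveUser);
```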
If we need this to be higher performance, the second option is a startup cache that loads all of our committers up front and keeps them there and fresh. That would take the lookups out of the call time entirely, in exchange for cluster resources, though I'd need to test how much RAM it would consume in this case.
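
A minimal sketch of what that could look like, assuming a bulk committer endpoint exists and a fixed refresh interval is acceptable (both are assumptions I'd need to verify, along with the memory cost of holding the full set):

```typescript
// Sketch of the warm "startup cache" idea: preload every committer at boot
// and refresh on an interval so entries stay fresh. The endpoint, refresh
// period, and Committer shape are placeholders.
type Committer = { drupalId: string; name: string };

const committerCache = new Map<string, Committer>();
const REFRESH_MS = 15 * 60 * 1000; // hypothetical 15-minute refresh

async function loadAllCommitters(): Promise<void> {
  // Placeholder endpoint returning the full committer list.
  const res = await fetch("https://example.org/api/committers?_format=json");
  const committers = (await res.json()) as Committer[];
  for (const c of committers) committerCache.set(c.drupalId, c);
}

export async function startCommitterCache(): Promise<void> {
  await loadAllCommitters(); // warm the cache at startup
  setInterval(() => {
    loadAllCommitters().catch((err) => console.error("committer refresh failed", err));
  }, REFRESH_MS);
}

// Lookups then never leave the process, at the cost of keeping the
// full committer set in memory.
export function getCommitter(drupalId: string): Committer | undefined {
  return committerCache.get(drupalId);
}
```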