Update memory reqs
This afternoon @linkfang reported errors accessing the staging instance of the application, with frequent "Application not available"
messages. Looking into it, once the pod's total memory reached 270-280Mi it appeared to run out of memory: the pod would crash, dump the application, and restart to attempt a more stable run.
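For reference, the crashes show up as OOM kills in the pod status; `kubectl describe pod` output looks roughly like this (pod name and namespace here are placeholders):

```
$ kubectl describe pod app-server-7d9f8-xk2lp -n staging
...
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
...
```

Exit code 137 is 128 + SIGKILL, which is what the kernel OOM killer sends, so it lines up with the memory numbers above.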
As a test I raised the staging limit to 512Mi, deliberately oversized so we could dial it back. Running with this configuration there were no more crashloops for the server, and memory topped out around 330Mi. Since usage levelled off rather than climbing, I don't think any sort of leak was introduced; the new feature just pushed the memory floor over the current limit.
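If anyone wants to double-check the numbers, this is how I've been watching usage (namespace assumed):

```
# Per-container memory usage for the staging pods
kubectl top pod -n staging --containers
```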
With that, I think a 384Mi limit should give us a little headroom for growth as we add new features. Once approved, I can update the production deployment to match these specs. I plan to monitor staging a little longer to ensure we push a stable product to production, so the hotfix will be pushed back by at least a day.
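For reference, the production change would be something like the following (the deployment name and namespace are assumptions; the 384Mi limit is the firm part):

```
# Raise the container memory limit on the production deployment
kubectl set resources deployment/app-server -n production --limits=memory=384Mi
```

If the manifests are under source control, the equivalent edit to the `resources.limits.memory` field in the deployment spec would be the cleaner route, so the change doesn't get reverted on the next deploy.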