Possible memory leak? #133

Closed
MasterGroosha opened this issue Jul 1, 2019 · 15 comments · Fixed by #251

Comments

@MasterGroosha

MasterGroosha commented Jul 1, 2019

Describe the bug

I'm running WriteFreely v0.9.1 on my Scaleway instance with 2 GB of RAM. It's a single-user federated instance with SQLite as the backend. I'm managing it via systemd with the following service file:

[Unit]
Description=Write Freely Instance
After=syslog.target network.target

[Service]
Type=simple
WorkingDirectory=/home/groosha/writefreely
ExecStart=/home/groosha/writefreely/writefreely >> /var/log/writefreely.log
Restart=always

[Install]
WantedBy=multi-user.target

The only modification I made was replacing one of the "letter-avatars" with a custom one so that a proper profile picture shows up on Mastodon.

Symptoms

If I run WriteFreely for a long time (let's say a month) without even touching it (no users, no new posts, nothing), it consumes almost all free RAM on the server, causing problems for other apps.
For example, my Python scripts, which run as cron jobs and regular processes, raise exceptions like RuntimeError: can't start new thread or vedis.cpython-35m-x86_64-linux-gnu.so: failed to map segment from shared object. Running htop also shows that the writefreely process consumes ~60% of RAM:
[screenshot: htop output]
(notice that I'm running only one process, but htop shows all threads)

After I restart WriteFreely with systemctl restart writefreely, everything goes back to normal:
[screenshot: htop output after the restart]
(here I enabled the "tree" view of processes, showing that there is only one writefreely process and the others are most likely threads)

My bad, I didn't write down the Go internal stats from the /admin page; I guess they would have helped a lot.

Expected behavior

RAM usage should depend on active users or API usage, not increase as time goes by.

Application configuration

  • Single mode or Multi-user mode? Single mode
  • Database? SQLite
  • Open registration? No
  • Federation enabled? Yes
  • Operating system? Ubuntu 16.10 (oops!)

Version or last commit: 0.9.1

@MasterGroosha
Author

I've also noticed that despite the ExecStart=/home/groosha/writefreely/writefreely >> /var/log/writefreely.log command in the systemd service description, there's no such file as /var/log/writefreely.log 🤔

@thebaer
Member

thebaer commented Jul 3, 2019

Thanks for filing this, @MasterGroosha; we definitely want to look into it. For now, if anyone with experience with Go profiling tools wants to help out, that'd be a huge help!

(As for the logging, I'd suggest updating the systemd service to the current config shown in the Getting Started guide.)

@prologic

prologic commented Jul 4, 2019

Are we sure it's not a goroutine leak? In my experience that's usually more common.

Terminating the process with SIGQUIT (^\) after it has been running for a while might shed some light, since the Go runtime dumps all goroutine stacks on SIGQUIT.
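As a non-destructive complement, Go's net/http/pprof package exposes the same goroutine stacks over HTTP, so a growing goroutine count can be watched without killing the process. A minimal sketch, assuming an arbitrary private port (nothing here is WriteFreely-specific):

package main

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers /debug/pprof/* handlers on the default mux
)

func main() {
    // /debug/pprof/goroutine?debug=1 lists every goroutine and its stack.
    log.Println(http.ListenAndServe("localhost:6060", nil))
}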

@prologic

prologic commented Jul 4, 2019

I've also noticed that despite the ExecStart=/home/groosha/writefreely/writefreely >> /var/log/writefreely.log command in the systemd service description, there's no such file as /var/log/writefreely.log 🤔

That doesn't look quite correct. ExecStart AFAIK doesn't invoke a shell, nor does it log to plain-text files. You want to use journalctl to view the logs, which are captured and stored in binary form and require systemd tooling to view.
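For example, journalctl -u writefreely -f would follow those logs live (assuming the unit file is installed as writefreely.service).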

@prologic

prologic commented Jul 4, 2019

I can also confirm this memory leak in my own instance (I don't recall which version specifically) with this Prometheus query:

container_memory_usage_bytes{container_label_com_docker_swarm_service_name="blog_blog"}

[screenshot: memory usage graph from the query above]

@prologic

prologic commented Jul 4, 2019

@thebaer If you add instrumentation like in https://prometheus.io/docs/guides/go-application/, it will help debug this and make it easier to monitor instances' runtime behaviour. I'm happy to put up such a PR and documentation for it (more of an ops feature than a user-facing one, but an Admin UI could be developed to pull in some useful data/graphs too).
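The instrumentation in that guide boils down to serving a /metrics endpoint with prometheus/client_golang. A minimal sketch, assuming that library and an arbitrary port (none of this exists in WriteFreely yet):

package main

import (
    "log"
    "net/http"

    "github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
    // The default registry already exports Go runtime metrics
    // (heap size, goroutine count, GC pauses), which is enough
    // to chart a leak like the one in the graph above.
    http.Handle("/metrics", promhttp.Handler())
    log.Fatal(http.ListenAndServe(":2112", nil))
}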

@MasterGroosha
Author

I'm preparing for a migration, so the WriteFreely instance will be stopped. Right now it consumes 13.7% of memory, with the following stats from the admin page (a sketch of reading the same runtime counters directly follows the list):

WriteFreely v0.9.1
Server Uptime 6 days, 1 hour, 23 minutes, 37 seconds
Current Goroutines 9
Current memory usage 247 MB
Total mem allocated 513 MB
Memory obtained 273 MB
Pointer lookup times 0
Memory allocate times 5057914
Memory free times 2631677
Current heap usage 247 MB
Heap memory obtained 255 MB
Heap memory idle 5.9 MB
Heap memory in use 250 MB
Heap memory released 0 B
Heap objects 2426237
Bootstrap stack usage 576 KB
Stack memory obtained 576 KB
MSpan structures in use 4.5 MB
MSpan structures obtained 255 MB
MCache structures in use 3.4 KB
MCache structures obtained 16 KB
Profiling bucket hash table obtained 1.5 MB
GC metadata obtained 10 MB
Other system allocation obtained 881 KB
Next GC recycle 492 MB
Since last GC 92.8s
Total GC pause 0.9s
Last GC pause 0.000s
GC times 3837
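Figures like these are what Go's runtime.MemStats (plus runtime.NumGoroutine) reports. A minimal sketch of reading the same counters directly, independent of WriteFreely's admin code:

package main

import (
    "fmt"
    "runtime"
)

func main() {
    var m runtime.MemStats
    runtime.ReadMemStats(&m)

    // Roughly the counters shown on the admin page above.
    fmt.Printf("Goroutines:    %d\n", runtime.NumGoroutine())
    fmt.Printf("Heap in use:   %d MB\n", m.HeapInuse/1024/1024)
    fmt.Printf("Heap idle:     %d MB\n", m.HeapIdle/1024/1024)
    fmt.Printf("Heap released: %d MB\n", m.HeapReleased/1024/1024)
    fmt.Printf("GC runs:       %d\n", m.NumGC)
}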

@thebaer
Member

thebaer commented Jul 7, 2019

@prologic That'd be great! Please feel free to submit a PR. At least one user has requested this feature, and I think it'd be a great addition beyond solving this issue.

thebaer added a commit that referenced this issue Jan 29, 2020
This gets rid of the gorilla/context dependency, which might have been
causing a memory leak.

We noticed some serious memory leakage on Write.as that seemed to point
to this library. One heap snapshot:

      flat  flat%   sum%        cum   cum%
  259.13MB 30.41% 30.41%   268.13MB 31.46%  net/textproto.(*Reader).ReadMIMEHeader
  105.71MB 12.40% 42.81%   105.71MB 12.40%  github.com/gorilla/context.Set
   78.53MB  9.21% 52.03%   125.53MB 14.73%  github.com/gorilla/sessions.(*Registry).Get
   55.51MB  6.51% 58.54%    82.52MB  9.68%  net/http.(*Request).WithContext
   38.01MB  4.46% 63.00%    38.01MB  4.46%  github.com/gorilla/mux.extractVars
      35MB  4.11% 67.11%       53MB  6.22%  context.WithCancel
   34.50MB  4.05% 71.16%    34.50MB  4.05%  context.WithValue
      27MB  3.17% 74.32%       27MB  3.17%  net/http.cloneURL
      26MB  3.05% 77.38%       26MB  3.05%  github.com/gorilla/sessions.NewSession
      18MB  2.11% 79.49%       18MB  2.11%  context.(*cancelCtx).Done
   16.50MB  1.94% 81.42%    16.50MB  1.94%  syscall.anyToSockaddr
      14MB  1.64% 83.07%       47MB  5.52%  github.com/gorilla/sessions.(*CookieStore).New
   13.50MB  1.58% 84.65%    51.51MB  6.04%  github.com/gorilla/mux.(*Route).Match
   11.67MB  1.37% 86.02%    13.21MB  1.55%  regexp.(*Regexp).replaceAll
    9.72MB  1.14% 87.16%    22.94MB  2.69%  regexp.(*Regexp).ReplaceAllString
    9.50MB  1.11% 88.28%   115.21MB 13.52%  github.com/gorilla/sessions.GetRegistry

With the help of these articles, we tracked it down to this dependency,
and upgraded the library, which seems to have completely fixed the issue
so far:

https://rover.rocks/golang-memory-leak/
https://medium.com/@walterwu_22843/golang-memory-leak-while-handling-huge-amount-of-http-request-35cc970cb75e

This should fix #133
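For context, a minimal sketch of the failure mode those articles describe, assuming a typical gorilla/mux plus old gorilla/context setup (illustrative only, not WriteFreely's actual handler code):

package main

import (
    "net/http"

    gcontext "github.com/gorilla/context"
    "github.com/gorilla/mux"
)

func main() {
    r := mux.NewRouter()
    r.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
        // gorilla/context keeps values in a package-level map keyed by
        // *http.Request; nothing removes this entry automatically.
        gcontext.Set(req, "session", "value")
        w.Write([]byte("ok"))
    })

    // Without ClearHandler (or an explicit context.Clear per request),
    // those map entries accumulate and the heap keeps growing.
    http.ListenAndServe(":8080", gcontext.ClearHandler(r))
}

Newer gorilla releases keep per-request data on the request's own context instead, which is presumably why upgrading the dependency made the leak go away here.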
@thebaer
Member

thebaer commented Jan 29, 2020

We started noticing memory issues on our Write.as instance and, using Go's pprof tool among other things, tracked it down to an old dependency.

If you all wouldn't mind testing out the fix in #251, I'd love to hear whether or not it fixes the issue for you!

@MasterGroosha
Author

Thank you, @thebaer!
Unfortunately, I'm not very experienced with Go/Node.js, so I think I'll have a hard time building WriteFreely from source. Would you mind making a Linux amd64 build, please, so that I can simply deploy it on my server?

@thebaer
Member

thebaer commented Jan 31, 2020

Sure, no problem. Here's the build for this branch: writefreely_0.11.2-88-g8fce34b_linux_amd64.tar.gz

@MasterGroosha
Author

MasterGroosha commented Feb 3, 2020

@thebaer I've just started deploying the version you built for me, and I got a strange error:
ERROR: 2020/02/03 14:58:54 app.go:568: migrate: near "(": syntax error

My config:

  • Production, behind reverse proxy
  • Local port 8085
  • SQLite database
  • writefreely.db filename
  • Single-user blog
  • Federation enabled
  • Federation usage stats Private
  • Instance metadata privacy Public

When this error appeared for the first time, I decided to try again with the same config, and it said "Database already initialized". Trying to start WriteFreely then results in an error:
ERROR: 2020/02/03 15:03:46 main.go:119: init keys: open keys/email.aes256: no such file or directory

Update: my bad, I forgot to run the --gen-keys action. That "migrate" issue is still valid, though, I guess.

@thebaer
Member

thebaer commented Feb 5, 2020

Ah, our latest migrations don't work with SQLite -- sorry about that. We should have that fixed in the near future, but it's probably best to avoid this version in production for now if you're using SQLite.

@MasterGroosha
Author

@thebaer By the way, this version seems to be working (at least, I managed to deploy it and create a post), so this is not a breaking bug for me. I'm currently running WriteFreely in mostly "idle" mode to see if any memory leaks occur. So far so good.

@thebaer
Member

thebaer commented Feb 5, 2020 via email
