Achievement Unlocked!

Over summer, I was working towards a particular goal. I mean the Australian summer, so that was more than 6 months ago now. My plan was to bring to the new site a feature that I had mocked up on the old site 2 years before. However, even then the old site was becoming difficult to work on.

I actually implemented the feature with the server in Kotlin, and the web page in AngularJS. AngularJS has since died, and I am now using Angular 7 (which is sooooo much nicer!). Kotlin is a language I do love, but the architecture of the server was so 2015, so that code could not continue to live in the serverless world. And then when I did start writing serverless code, TypeScript on Node.js seemed like a better choice.

So anyway, what did I do? Well, it’s just the Rate of Play of Games New to You, for multiple users (which the page could already do), but now it’s EASY TO USE! So people can actually see that the page can do that.

I tried to add a few more people to the list, but I ran into a problem I hadn’t encountered before – there was too much data for the Lambda to return! It seems there’s a 6 megabyte limit on the response payload. After I return that data, it gets compressed on its way to the browser, down to about 10% of the original size, but the limit applies before compression.

To tell the truth I’m a little nervous about the amount of data I send back, as it costs me money, so maybe it’s worth some work to make that data smaller.
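If I do go down that path, one option would be to compress inside the handler, so that the 6 megabyte limit applies to the gzipped bytes rather than the raw JSON. Here’s a minimal sketch, assuming the lambda-proxy integration with binary responses enabled on API Gateway – the details of my real setup may differ:

```typescript
import { gzipSync } from "zlib";
import { APIGatewayProxyResult } from "aws-lambda";

// Sketch: gzip a large JSON payload inside the handler, so the ~6 MB
// response limit is measured against the compressed bytes.
// Assumes the lambda-proxy integration with binary media types enabled.
function gzippedResponse(payload: unknown): APIGatewayProxyResult {
    const body = gzipSync(JSON.stringify(payload));
    return {
        statusCode: 200,
        headers: {
            "Content-Type": "application/json",
            "Content-Encoding": "gzip",
        },
        isBase64Encoded: true,
        body: body.toString("base64"),
    };
}
```

The browser would receive much the same compressed payload it gets today, but the uncompressed 6-megabyte version would never cross the Lambda boundary. (Base64 adds about a third back, but gzip at 10% leaves plenty of headroom.)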

This new feature is on the test system and is coming to the live system within 24 hours.

That’s the Way the Money Goes!

I’ve just figured out a new way to work the AWS CloudWatch graphs, so rather than just graphing the number of Lambda invocations, I can graph the total duration. For Lambda, I pay for each invocation, but I also pay for how long each one runs, so a long invocation costs as much as several short ones. In other words, I’m paying for duration as well. I figured out how to graph total duration per Lambda.
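For the record, you can pull the same number out with the AWS SDK by asking for the Sum of the Duration metric. I built the graph in the CloudWatch console, but a sketch of the equivalent query (using the v3 SDK) looks like this:

```typescript
import {
    CloudWatchClient,
    GetMetricStatisticsCommand,
} from "@aws-sdk/client-cloudwatch";

const client = new CloudWatchClient({});

// Total Lambda duration per day for one function over the last 2 weeks:
// the Sum statistic of the AWS/Lambda Duration metric, in milliseconds.
async function totalDurationPerDay(functionName: string) {
    const now = new Date();
    const start = new Date(now.getTime() - 14 * 24 * 60 * 60 * 1000);
    const result = await client.send(
        new GetMetricStatisticsCommand({
            Namespace: "AWS/Lambda",
            MetricName: "Duration",
            Dimensions: [{ Name: "FunctionName", Value: functionName }],
            StartTime: start,
            EndTime: now,
            Period: 24 * 60 * 60, // one datapoint per day
            Statistics: ["Sum"],
        })
    );
    return result.Datapoints;
}
```

One Sum datapoint per day per function, which should match what the console graph plots.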

This graph is for the last 2 weeks, and shows that inside-dev-processPlaysResult is taking by far the most time. That’s the one that takes plays scraped from BGG and writes them to the database. I’ll take a look at that code. It is a bit on the complex side, as it’s the bit that infers plays of base games from plays of expansions, but I can usually find something to optimise.
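For the curious, the inference works along these lines – though this is only an illustration, the mapping and types here are invented, and the real code is rather more involved:

```typescript
// Illustration only: a logged play of an expansion implies a play of its
// base game. The mapping and the Play shape are invented for this sketch.
const baseGameOf: Record<number, number> = {
    // expansionId -> baseGameId, e.g. loaded from the database
};

interface Play {
    gameId: number;
    date: string;
    quantity: number;
}

function withInferredBasePlays(plays: Play[]): Play[] {
    const inferred = plays
        .filter((p) => baseGameOf[p.gameId] !== undefined)
        .map((p) => ({ ...p, gameId: baseGameOf[p.gameId] }));
    return plays.concat(inferred);
}
```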

Looking at the same graph for the past 4 weeks, all we can see is the Kaboom! Everything else literally pales into insignificance compared to that bug. Cool!

Sorry, I’m having too much fun graphing AWS performance to graph board game stuff :-).

Cleanin’ Out My Closet

Nah, I’m not going to go all Eminem and aggressive and stuff. I’ve literally been cleaning stuff up today. It was a great, productive day. There were a couple of users that I added over a week ago, and before advising them that their pages were ready, I decided to check whether they were, and they weren’t. This is the sort of bug that cannot be tolerated. With 3034 users, stuff’s got to work without me watching it.

So I hunted down the problem, and discovered that I’d modified some SQL in a buggy way a couple of weeks ago, and that the code swallowed the resulting error so that I never noticed. So I fixed the SQL and the users started being created.

But they still weren’t coming through properly, so I investigated further. There were half a dozen or so users who had deleted themselves from BGG, and whom I was therefore unable to process. Yet I kept trying to, every minute. So I deleted them. And then there was one user whose BGG collection is so big that BGG just tells me it’s too big. I’m not sure what to do about that.

You will notice in the graph of Lambda invocations below that there was a solid orange band at the bottom. That was just doing broken things over and over. Oh, and by the way, I pay for the height of this graph – Lambda invocations cost some tiny amount of money. The right-hand end of the graph shows how much the orange band decreased after I fixed that stuff up. It will cost me a bit this month (like, a dollar), but next month should be better.

These sorts of problems can’t be allowed to persist. So I wrote some code to send errors to the database. When errors happen somewhere in the hundreds of thousands of Lambda invocations per month, I don’t necessarily notice them; if I write them to the database I can at least find them. With any luck I will find the next similar problem faster.
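Roughly, the idea looks like this – just a sketch, assuming a MySQL-style errors table and the mysql2 client, neither of which is necessarily what I really use:

```typescript
import { Connection } from "mysql2/promise";

// Sketch: record an error in the database instead of swallowing it.
// The errors table and the mysql2 client are assumptions for this example.
async function recordError(conn: Connection, source: string, err: Error) {
    await conn.execute(
        "insert into errors (source, message, happened_at) values (?, ?, now())",
        [source, err.message]
    );
}

async function safely<T>(
    conn: Connection,
    source: string,
    work: () => Promise<T>
): Promise<T> {
    try {
        return await work();
    } catch (err) {
        // Log first, then rethrow so the Lambda still fails visibly.
        await recordError(conn, source, err as Error);
        throw err;
    }
}
```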

So then, after cleaning that up, the new users started working properly, and I had a clear conscience. I then started emailing people who had been waiting to be added, to tell them that there was a new site and that they were on it. I emailed 280 people, some of whom had been waiting for 18 months. I hope they still play board games.

Anyway, whether or not they still remember who I am, it was nice to get 280 messages out of my inbox, and to have that weight of guilt lifted from my shoulders after such a long time. On the other hand, I’ve increased my potential active users by 280, and that might reveal some other problems. I don’t expect it will be too much, as the architecture I’ve chosen is nothing if not scalable, but you never know. The database is a non-scalable weak link, but I think the impact of users is trivial compared to the impact of the downloader.

And then, because I’m hyperactive or something (not to mention that the weather outside was a bit yukky, so I wasn’t tempted to do anything else), I updated my spreadsheet of ongoing costs. It was 3 months behind.

May 2019 shows a jump in Lambda costs, due to the Kaboom I blogged about previously. It was only $6, but it was an architectural problem that was going to stick around and cost more each month until I dealt with it. That’s why things like that get my attention sooner than actual useful features, and get blogged about.

The kaboom happens about every 35 days, which puts the next one in the first few days of July. Due to the dithering I put in, and the fix for the huge database index bug, I don’t expect a big kaboom, just more of a tremor. And due to the continued effect of the dithering, it will get smaller with every 35-day cycle.
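In case “dithering” isn’t clear: it just means adding random jitter to the 35-day schedule so the updates stop firing in lockstep. Something like this sketch, where the ±2-day window is invented for the example:

```typescript
const DAY_MS = 24 * 60 * 60 * 1000;

// Dithering sketch: schedule the next update 35 days out, plus or minus
// up to 2 days of random jitter, so the updates spread out over time.
function nextUpdateTime(lastUpdate: Date): Date {
    const jitter = (Math.random() * 4 - 2) * DAY_MS; // anywhere in ±2 days
    return new Date(lastUpdate.getTime() + 35 * DAY_MS + jitter);
}
```

Each cycle the jitter compounds, which is why the spike should flatten out a bit more every 35 days.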

The next thing I’m hoping to work on is the update schedule page. It’s not a headline feature, more of a necessary evil. Also, today I logged a play from October last year, so I need it myself. And of course so does everyone from time to time.

Work also continues on the login stuff that I mentioned in the blog post about sticking the cookie. Now that the cookie is working, I need all the bits of code to use it properly, or they won’t be able to access user data. And then I want to write more code which reads and writes user-specific data so I can realise some benefits from all of that mucking around.

Auth0 tells me I have 138 users with accounts, which I think is pretty wonderful since having an account is of little use. But it’s supposed to be a feature, so let me make it that way!

So You Can Take That Cookie…

I’ve been working on the login button for a few days. This is not because I want to, but because I discovered that the way I was handling login was regarded as bad practice. When a user logs in, Auth0 sends me a thing called a JWT (JSON Web Token), which is effectively information about who that user is and what privileges they get. So I was getting that and storing it in browser local storage where other parts of the site could retrieve it later.

It turns out that’s bad, because third party code that I use on the site might look into the browser local storage, get the JWT out, and send it off somewhere else for Nefarious Purposes (TM). Well, we don’t want nefarious porpoises around here. So the better way to do it is for me to send the JWT to my server, and for the server to set a cookie reminding me of who you are. That sounds easy enough.

But oh goodness me, the drama! Because my site is extstats.drfriendless.com, and my server is api.drfriendless.com, which are different, they don’t trust each other unless I do all sorts of “yeah, it’s OK, they’re my friend” stuff in the code. That’s called CORS, and although it’s not so complicated, it’s just too boring to remember.

And you can’t do CORS and cookie stuff with the API Gateway Lambda integration (well, not very easily using serverless framework), you have to use the lambda-proxy integration. Which is OK, but it means everything in the code has to be much more explicit. So I did all that.

And then it still didn’t work. I could see the Set-Cookie header coming back from my server, but Chrome denied it existed. Firefox said it existed, but ignored it. So I poked around for a bit longer, and found out that if you set an expiry time on a cookie, Chrome throws it away. Why? I have no idea. It just does. So I have to set the maximum age for the cookie instead.
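Putting that together, the lambda-proxy response has to spell out both the CORS headers and the cookie explicitly. A sketch of the shape of such a response – the cookie name and the 30-day Max-Age are invented for the example:

```typescript
import { APIGatewayProxyResult } from "aws-lambda";

// Sketch of a lambda-proxy response that sets the login cookie.
// The cookie name and the 30-day Max-Age are invented for this example.
function loginResponse(sessionId: string): APIGatewayProxyResult {
    return {
        statusCode: 200,
        headers: {
            // extstats.drfriendless.com and api.drfriendless.com are
            // different origins, so CORS headers are required, and the
            // browser must be told that credentials are allowed.
            "Access-Control-Allow-Origin": "https://extstats.drfriendless.com",
            "Access-Control-Allow-Credentials": "true",
            // Max-Age rather than an expiry time, per the Chrome behaviour
            // described above. HttpOnly keeps third party scripts from
            // reading the cookie out of the browser.
            "Set-Cookie":
                `session=${sessionId}; Domain=drfriendless.com; Path=/; ` +
                `Secure; HttpOnly; Max-Age=${30 * 24 * 60 * 60}`,
        },
        body: JSON.stringify({ ok: true }),
    };
}
```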

And then finally I got the cookie set. And by then I had kinda forgotten what I was trying to achieve. Like a chump!

So I think now the cookie is working as intended, but I have to change the code on the pages to use it properly. At the moment the user page (the one you get to if you click on your user name under the Logout button) is broken, and is awaiting the CDN’s pleasure to be fixed.
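For reference, the page-side change is small but easy to forget: every request to api.drfriendless.com has to opt in to sending the cookie across origins. Something like this, where the URL path is invented for the example:

```typescript
// Sketch: a cross-origin request that actually sends the session cookie.
// The /v1/user path is invented for this example.
async function loadUserData(): Promise<unknown> {
    const response = await fetch("https://api.drfriendless.com/v1/user", {
        credentials: "include", // without this, the cookie stays home
    });
    return response.json();
}
```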

Overall I quite like this solution. I feel I have more control over where data is going, and I understand how it works. It has just been pretty painful to get to this point!

Fiddle Faddle!

I’ve been quiet for a couple of weeks, but I’ve been persistently working on the Plays by Month page. I recently added a couple of tables to the Plays page that should have been on Plays By Month, so I moved them across. Of course it wasn’t quite a perfect match, and it turned out to be a lot more fiddly than I anticipated. There are so many numbers!

For example, on that page the “plays for a month” could mean the plays in the month, the cumulative plays forever until that month, or the cumulative plays from the start of the year until that month. There’s meant to be synergy between the tables, but it turns out there’s just complexity and confusion. Anyway, it’s done now, and I can get onto some more interesting problems.
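To make the distinction concrete, each month on the page effectively carries three different numbers (the field names are invented for the example):

```typescript
// The three meanings of "plays for a month" on the Plays by Month page.
// Field names are invented for this illustration.
interface MonthlyPlays {
    playsInMonth: number;     // plays logged during that month
    cumulativePlays: number;  // all plays up to and including that month
    yearToDatePlays: number;  // plays since 1 January of that year
}
```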

Those tables will eventually be removed from the Plays page and replaced with other things that use the data that page already has.