The Beginning of the Endpoint

A forum for discussion of the front-page blog posts on the Wongery.

The Beginning of the Endpoint

Post by Clé »

Ugh. Less than two weeks left till the hard launch, and, uh, yeah, the additional subspaces that I wanted to make are still missing, and I haven't gotten anything obvious done on any of the other things I've been saying I was going to do.

There are reasons for this. I'm still under a lot of financial stress. I'd mentioned before that things have been slow in the industry I work in. That's pretty much over now, but I haven't recovered from the debt and the depletion of my savings from that period; I've been working a lot lately, I just... haven't been getting paid. I'd said before that one disadvantage of freelance work is that if things are slow in the industry, I don't have a guarantee of a steady paycheck; another disadvantage, though, is that even if things aren't slow in the industry, I still don't necessarily have a guarantee of a steady paycheck, because companies don't necessarily pay promptly, or on any sort of set schedule. I just finally received a six-hundred-dollar paycheck I'd been waiting on since mid-October, and when I say I received it, I mean it was finally sent and should be in my bank account in the next few days, so I still don't really have the money yet—though at least I know it's on its way. And most of my work this month has been for a single company, which in the past has always been good about paying twice a month, once at the beginning of the month and once midway through. But they seem to have changed their schedule (or just... stopped having a schedule), and I haven't been paid this month at all; they owe me more than four thousand dollars, and I have no idea when I'm going to get it. (Also, yeah, that loan I was hoping I might get is a definite no-go, though again in the long run I'm probably better off without it anyway.)

So, yeah, that's... not great. I mean, again, I don't think I'm in a really dire situation. I wish that company were sticking to its previous regular payment schedule, but they're reputable enough that I'm not really worried about whether I'll get the money at some point—I just don't know when (especially with the holidays imminent). Between that belated paycheck from October and a few hundred dollars I cashed out from a credit card rewards program, I should have enough to scrape by for the next few weeks. But if I don't get those four thousand dollars I'm owed before January, well, I'm not sure how I'm going to afford most of what I'd hoped to have for the hard launch. (Also I'll be late with my rent next month, which, uh, isn't ideal.)

But anyway, I didn't want to make this blog post just whining about my finances. I mean, I wanted to give a little background for why I've had a hard (or even harder than usual) time focusing on things, but it's not the case that I haven't been getting any work done on the Wongery. I just... haven't been writing any articles.

One thing that I've been spending way more time on the last few weeks than I maybe should have, given that it's a relatively minor part of the site, is designing Strike Engine cards for the Wongery. I guess I just want to make sure I have something in the Gamespace besides RPGs, and, well, while I have a lot of experience making RPG materials (even if I haven't actually published anything), I have a lot less experience making cards for CCGs, so I want to take time to try to do it right. I've even been fairly active in the Strike Engine Discord asking questions and looking for feedback; I hope I haven't been too much of a bother there. Anyway, given my increasing appreciation of the limited time available, my ambitions for how many cards I want to have done before the hard launch have been decreasing. Originally, I'd hoped to have full sets of cards for Dadauar, Curcalen, and Varra (in the last case more specifically for Thamarand, and more specifically still focusing mostly on conflict between the Five Masters—oh, I mentioned back in June that I was considering fleshing out the Five Masters of Thamarand a bit? That was why). Then I realized that was way too much, and decided to just focus on getting a set done for Dadauar—a big, three-hundred-something-card set with three main factions, for the onirarchs of the Free Republic, the resistance to the onirarchs, and the Bathybius. Then most recently I was persuaded on the Strike Discord that this was too much, and that I should perhaps first focus on just designing a fifty-five-card deck for Clash of Champions, a particular optional Strike Engine format, so I figured I'd try to make a deck for each of the three factions. Well, at this point, it's looking like I should be able to finish the onirarch deck before the hard launch, but the resistance deck is a maybe, and the Bathybius deck a definitely not. Still, even if I don't finish it by the hard launch, the Bathybius Clash deck remains a part of my future plans—along with the rest of the three-hundred-something-card Dadauar set—and the Curcalen set—and the Varra set. Most of this may end up being pretty far in the future, though.

And of course I've been making progress in the Udemy courses I'm taking to (hopefully) help me develop the coding skills I'll need to make the MediaWiki extensions and customizations I want to make.

(You know, I'd been feeling a little guilty about not being able to spend time on the Wongery, but now that I'm writing all this out, I'm realizing that between the Strike cards, the Udemy courses, and some other behind-the-scenes work, I've actually been spending quite a bit of time working on the Wongery. I just haven't been spending time writing articles, or doing anything immediately obvious to the end user.)

So, speaking of those Udemy courses, I've been saying that one of the reasons I hadn't started on implementing the subspaces and the other MediaWiki customizations is because I wanted to get through the Udemy PHP course first so I'd have a better idea of how to do it. Well, I have finally finished the Javascript course I was taking on Udemy, and accordingly I have finally started the PHP course. But... at this point, I'm not sure I'm going to be able to get through the course before the hard launch, and even if I do, well, it doesn't seem likely I'll get through it in time to leave me much time to actually put it into practice.

Which doesn't mean I'm not still hoping to get those subspaces implemented before the hard launch. No, what it means is that I'm not going to wait until I get through the PHP course to do it. I have managed to teach myself some PHP, after all; I know enough to have written the code for the Wongery blog; and while I'm certainly no expert and haven't had any formal training in Javascript or PHP, maybe I know enough to muddle through what I need for now. I mean, I'll still take that PHP course to learn better ways to do things and to hopefully be able to implement more improvements later, but maybe in the meantime I can figure out how to do some of what I need to do with the limited knowledge I have.

Not that I started by trying to implement the subspaces, though. No; I started out by doing something else I'd been meaning to do for a while. I mentioned before that these blog posts are written in Wikitext, which is then parsed to HTML—but that this is done by directly calling functions in the MediaWiki code, a clunky technique prone to breaking if the MediaWiki installation is updated. What I should have done is parse the wikitext by calling the MediaWiki APIs—and I decided to try doing that.

A bit of background for those who don't know what an API is (and who for some reason are interested in a description). I should warn readers at the outset that this is not going to be a good explanation of what an API is, because I only just learned what APIs are relatively recently myself, and there's probably a lot about them I still don't know or may have wrong. So take this with a grain of salt, and if you really want a good explanation, look elsewhere for one by someone with a better idea of what they're talking about. But here's my no doubt highly flawed attempt at explaining it: An API is an interface on a website which allows it to return information to other websites on request. An API is accessed through an "endpoint", which is essentially a URL that is not meant to be visited directly by users, but is meant to be accessed by other sites through HTTP requests. For example, the endpoint for the MediaWiki installation on the Wongery is http://wongery.com/w/api.php—you can visit that URL directly in the browser, but all you get is an auto-generated documentation page with a lot of broken links (because they refer to pages that exist on other major MediaWiki installations like Wikipedia but that I haven't (yet?) implemented on the Wongery). But if another site makes a request to that page, with additional information as to what exactly it's requesting, the endpoint will return information based on the request.
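
To make that a little more concrete, here's roughly what asking an API for something looks like from the PHP side. (This is just an illustrative sketch using the general site-information query; it's not code that's actually in the Wongery site, and I'm glossing over error handling entirely.)

Code: Select all

// Ask the wiki's API endpoint for some general information about the site.
$endpoint = 'http://wongery.com/w/api.php';
$params = [
    'action' => 'query',
    'meta'   => 'siteinfo',
    'format' => 'json',
];

// Send the request and decode the JSON the endpoint sends back.
$response = file_get_contents( $endpoint . '?' . http_build_query( $params ) );
$data = json_decode( $response, true );

print_r( $data['query']['general'] ); // site name, main page, MediaWiki version, and so on

The request is just a URL with some extra parameters tacked on, and the response is just structured data; that's really all there is to it, at least at the level I understand it.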

So, anyway, I figured maybe this wouldn't be too hard. After all, lots of people use MediaWiki, and it's pretty extensively documented; I may not know much about PHP, but I'm pretty good at following instructions. The hardest part would be refactoring the site code to do the parsing through asynchronous Javascript functions rather than through PHP, since calling APIs was a Javascript thing. Or so I'd thought, but when I looked at the sample code in the MediaWiki documentation on parsing wikitext, I saw that it was given in several different languages, including PHP, and I realized I could just do it in PHP after all. Which in retrospect I should have realized before; I'd associated APIs with Javascript because I first learned about them in the Javascript course (I mean, I had heard of APIs before, but didn't really know what they were), but if it's just a matter of sending information to a remote site and getting information back, there's no particular reason why it would have to be done in any specific language. Like I said, I only recently learned about APIs, and I still don't really know a lot about them. I don't know what I'm doing. I cannot stress that enough.

Warning: The following paragraphs contain a lot of technical programming information that many readers may not be interested in. (I mean, not super technical, or I wouldn't understand it myself, since I'm not much of a programmer, but somewhat technical, anyway.) Although if you've got this far through this blog post, maybe you're interested enough to keep reading. I don't know. It's up to you.

Regardless, I looked through the documentation and thought I knew what needed to be done. Previously, in the Wongery site code any wikitext that needed to be parsed was sent to a function called MWParse (in a file called, for some reason, ParseMW.php), which itself called the MediaWiki code to do the actual work, first creating an object called a ParserFactory and then using that to create a parser, and then sending it the text to be parsed. (This may sound relatively simple, but it had taken me way too long to figure out how to do this.) Now, I removed the direct calls to the MediaWiki code, and instead inserted the code from the MediaWiki documentation on parsing wikitext, altering the request appropriately for the particular task I needed.
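
For the curious, the new version of the function boils down to something along these lines (a simplified sketch based on the documentation's sample code, not the exact contents of ParseMW.php):

Code: Select all

// Parse wikitext into HTML by way of the wiki's own API, rather than by
// calling the MediaWiki code directly.
function MWParse( $wikitext ) {
    $endpoint = 'http://wongery.com/w/api.php';
    $params = [
        'action'       => 'parse',
        'text'         => $wikitext,
        'contentmodel' => 'wikitext',
        'format'       => 'json',
    ];

    // Send the request with cURL and hang onto whatever comes back.
    $ch = curl_init( $endpoint . '?' . http_build_query( $params ) );
    curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true ); // return the result instead of printing it
    $output = curl_exec( $ch );
    curl_close( $ch );

    // The parsed HTML is buried a couple of levels down in the JSON response.
    $result = json_decode( $output, true );
    return $result['parse']['text']['*'];
}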

And, of course, it didn't work.

Through some testing and variable dumps, I figured out that the line that wasn't working was a line of code that said "

Code: Select all

$output=curl_exec( $ch );
". This was the line that was supposed to actually retrieve the results of the API call and put them in the variable

Code: Select all

$output
, and instead it was returning, well, nothing. Or, more specifically, it was returning a special value called

Code: Select all

null
(http://en.wikipedia.org/wiki/null_pointer), which meant, well, nothing—or more specifically, I guess, it meant that the variable in question did not contain valid data.

The problem is that I had no idea what this line of code actually did. Well, okay, in broad terms, like I said, it seemed it was supposed to retrieve the results of the API call, but I had no idea how it was supposed to be doing it, nor the function of the lines before and after it that also referred to something called "curl". What was "curl"; what was it doing; and was there any way to get it to give me a specific error message so I could try to track down what was going wrong?

Well, it turned out cURL was the "client URL library", a library that allowed PHP code to make requests to other websites. (This is something that I presume the PHP course I'm taking on Udemy will cover at some point, but I've only just started the course, so if it is covered I haven't gotten to that point yet.) And yes, there was a function I could call,

Code: Select all

curl_error
(https://www.php.net/curl_error), that would give me a specific error message. And the error message was...

Code: Select all

ERROR :SSL certificate problem: unable to get local issuer certificate
.
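
For what it's worth, getting at the error was just a matter of a check along these lines (a sketch of the sort of thing I added, not my exact code):

Code: Select all

// If curl_exec() came back empty-handed, ask cURL what went wrong.
if ( $output === false || $output === null ) {
    echo 'ERROR :' . curl_error( $ch ); // produces messages like the one above
}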

Well, heck. SSL is another thing that... well, I know what it is, sort of, in very broad terms, but I don't really know much about it. It's... something that allows websites to transfer information securely, and secure websites have an "SSL certificate" to guarantee their security. I know that's really vague, but like I said, I don't really know much about SSL myself except in really vague terms. Anyway, though, presumably the service where the Wongery website is hosted has the proper SSL certificates (at least, I hoped so), but I was making the code modifications on a local copy of the site on my desktop, which apparently did not. I did some websearching to see if there was a way to install an SSL certificate on a local host, but after I found a few possible leads it occurred to me that maybe I was looking at the problem from the wrong end, and maybe there was a way to just tell cURL to ignore certificate problems. And there was:

Code: Select all

curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
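// for local testing only; certificate verification should stay on for the production site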
. And voilà; the site worked. (I made a mental note to comment out that line before I uploaded the files to the production version of the site.)

The parsing function wasn't the only part of my code that had called the MediaWiki functions directly, though. There was also the code for the "Random Article" sidebar, which called a mess of code copied from the SpecialRandompage.php file in the MediaWiki installation, and which not only called a function in the MediaWiki code called wfRandom but also queried the wiki database directly. Again, this was clunky and inelegant and prone to breakage, so if there was a way to do this through an API call too, I figured I should do it. And there was, but it was a little trickier than I had anticipated; the documentation for getting random pages through the API was clear about how to get a random page title and id, but it wasn't at all clear how I could get the content of the page without having to make a second API call. Eventually I did figure out a way to do it, but it was a little counterintuitive; rather than calling the API with the "

Code: Select all

list
" parameter, I instead had to pass the "

Code: Select all

revisions
" parameter with "

Code: Select all

generator=random
"... among others. Anyway, it took some experimentation, but I did eventually get it working, and now no longer did any of my site code outside the wiki itself have to call any functions in the MediaWiki code directly. It was all done through APIs, as it should have been in the first place. And it all seemed to be working fine.

One thing I'd been hoping was an artifact of the awkward way I was parsing the wikitext and would be fixed once I changed the code over to use APIs was the fact that the redlinks in blog posts weren't actually, well, red... and that images weren't being correctly sized and formatted. It took far longer than it should have for it to occur to me that the links, at least, were just a CSS matter; the red links have a CSS class of "new", and all I have to do is insert a CSS rule to make them red. (The images are going to take a little more work to figure out.)
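
For the record, the rule in question is a one-liner along these lines (the exact shade is still up in the air; this is just the idea):

Code: Select all

/* Links to pages that don't exist yet get the "new" class; color them red. */
a.new { color: #ba0000; }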

Anyway, everything seemed to be working fine on my local version of the site; would it still work when I uploaded the altered files to the remote host? So, the moment of truth was upon me. I uploaded the revised files, and... no. No, it didn't work. The main page looked fine, but when I tried to look at an individual post it was blank. What was going on?

Well, I didn't want to do my debugging on the production version of the site, with all the publicly visible variable dumps and errors that would entail, but given that everything was working fine on my local version of the site and it was only the remote version that was broken, I didn't have much choice. I just hoped it wouldn't take long. So I tried poking around and looking at some variables to see what was going on, but had trouble pinning down the problem. Enlightenment finally came when I tried pasting the API request string directly into the browser address bar and got an error stating that

Code: Select all

The requested URL's length exceeds the capacity limit for this server.
Ah. Hm. Okay. So there's a limit to the length of a URL (which didn't apply on my local version of the site for whatever reason), and so passing the entire content of an article is apparently out of the question. (Which explains why there wasn't a problem with the main page, since it only shows brief excerpts of the articles.) Surely there was a way around this. Had I been a more experienced programmer, the solution no doubt would have been obvious, but as I am not, it took some digging and searching before I finally hit upon a possible answer: use a POST request instead of a GET request. (For those who don't know what the difference is between a POST request and a GET request... eh, okay, I'm not totally sure of all the differences myself, but the important difference for present purposes is that the data in a GET request is appended directly to the URL, while the POST data is sent separately.)

Unfortunately, while the MediaWiki API documentation did mention that the API could be sent POST requests, all of the sample code involved GET requests. Presumably POST requests could be sent through cURL, but never having used or indeed even heard of cURL before, I didn't know how to do it. Still, with a little more searching I managed to figure it out, and the site is back to full functionality—and this time without any direct calls to the MediaWiki code.
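
For anyone else in the same boat, the change on the cURL side turned out to be pretty small; it amounts to something like this (again a sketch, not necessarily exactly what I ended up with):

Code: Select all

// Send the request parameters in the body of a POST instead of tacking them onto the URL.
$params = [ /* the same request parameters as before */ ];
$ch = curl_init( 'http://wongery.com/w/api.php' ); // just the endpoint, with no query string
curl_setopt( $ch, CURLOPT_POST, true );
curl_setopt( $ch, CURLOPT_POSTFIELDS, http_build_query( $params ) );
curl_setopt( $ch, CURLOPT_RETURNTRANSFER, true );
$output = curl_exec( $ch );
curl_close( $ch );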

Now, with the exception of the redlinks, this all is, again, unfortunately, not something that has any obvious effect on the end user. The page looks the same as it always did. The main benefit, I guess, is just that it's less likely to break when I upgrade MediaWiki to new versions. Still... it's something that needed to be done, and it's something that I'm glad to have finally taken care of. And, having accomplished that... I think within the next few days I'm going to finally tackle implementing those other namespaces. It's something else I need to get done before the hard launch, and it's not going to get done if I don't do it. Even if I don't get through the PHP Udemy course before the hard launch... I may be able to do this with the limited PHP knowledge I have, and if I want to get it done before the hard launch I've got to go ahead and give it a try.

Eleven days left. Yeesh.

There are some non-Wongery-related things I'm behind on and really have to finish up by tomorrow, but after that... from then until the hard launch, pretty much every free moment I'm home, I'm going to be working on the Wongery. I'm not going to get everything done before the hard launch that I wanted to, but I'm going to get done as much as I can.