Testing Your ActionMailer Configuration

2018-08-04

Sometimes with Rails you just want to test basic email sending from the console. Here is how you can do it, without needing a Mailer or views or anything:

ActionMailer::Base.mail(
  to:      'dest@example.com',
  from:    'src@example.com', 
  subject: 'test',
  body:    'testing',
).deliver_now

In older versions of Rails (before 4.2), use deliver instead of deliver_now. This will use your ActionMailer settings but doesn’t require anything else, so you can test just the SMTP config (or whatever).
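If you want to double-check which settings that call will use, or point it at a different server just for this console session, you can poke at the configuration first. This is only a sketch; the SMTP values below are placeholders, not anything your app actually needs:

ActionMailer::Base.delivery_method   # e.g. :smtp
ActionMailer::Base.smtp_settings     # the hash Rails hands to Net::SMTP

# Override temporarily, for this console session only (placeholder values):
ActionMailer::Base.smtp_settings = {
  address:              'smtp.example.com',
  port:                 587,
  user_name:            'someuser',
  password:             'secret',
  authentication:       :plain,
  enable_starttls_auto: true,
}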

Adding an if method to ActiveRecord::Relation

2018-06-30

I often find myself chaining ActiveRecord scopes, like this:

q = Article
  .not_deleted
  .published_within(start_date, end_date)
q = q.with_tags(tags) if tags.any?

I wish there were a nicer way to put in the conditional scopes, without assigning to a temporary variable. What I really want is to write this:

Article
  .not_deleted
  .published_within(start_date, end_date)
  .if(tags.any?) { with_tags(tags) }

Wouldn’t that be nicer?

Many years ago I brought this up on the pdxruby mailing list, and no one seemed very interested, but I’ve always wanted it in my projects.

Here is a naïve implementation, which for simplicity I’ll add to Object instead of ActiveRecord::Relation (where it really belongs):

class Object

  def if(condition, &block)
    condition ? instance_eval(&block) : self
  end

end

5.if(true)  { self + 7 }    # equals 12
5.if(false) { self + 7 }    # equals 5

That almost works! The problem is that inside the block, things don’t really act like a closure. If we used this implementation, our example above would give a NameError about not finding a tags method or local variable. That’s because everything inside the block is evaluated with self set to the ActiveRecord::Relation instance.
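For example, with the naïve version above (and a hypothetical tags variable), you’d see something like this:

tags = %w[ruby rails]
Article.all.if(tags.any?) { with_tags(tags) }
# NameError: undefined local variable or method `tags' for #<ActiveRecord::Relation ...>
# (with_tags itself would resolve fine, since it's a scope on the relation,
#  but the local variable tags does not)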

Fortunately there is a way to fix it! Before calling the block, we can save the outside self, and then we can use method_missing to delegate any failures to it.

There is a nice writeup of this “cloaking” trick if you want more details. But if you read that article, perhaps you will notice it is not thread-safe, because it temporarily adds a method to the class, and two threads could stomp on each other if they did that at the same time.

This approach was in Rails ActiveSupport for a while as Proc#bind. They even fixed the multi-threading problem (more or less . . .) by generating a different method name every time. Unfortunately that created a new problem: since define_method takes a symbol, this creates more and more symbols, which in Ruby are never garbage collected! Effectively this is a memory leak. Eventually the Rails team deprecated it.

But we can still add something similar that doesn’t leak memory and is thread-safe. We just have to protect the brief moment when we define the method and then remove it, which is simple with a Mutex. In theory taking a lock adds some overhead, and possibly contention, but we don’t expect this to be a “hot spot”, so in practice the contention should be zero and the overhead trivial.

And here is our implementation (not on Object any more):

module ActiveRecord
  class Relation

    CLOAKER_MUTEX = Mutex.new

    def if(condition, &block)
      if condition
        # Temporarily define a method from the block so we can capture it as an
        # UnboundMethod, then remove it again. The mutex guards that brief
        # define/remove window against other threads doing the same thing.
        meth = self.class.class_eval do
          CLOAKER_MUTEX.synchronize do
            define_method :cloaker_, &block
            meth = instance_method :cloaker_
            remove_method :cloaker_
            meth
          end
        end
        with_previous_context(block.binding) { meth.bind(self).call }
      else
        self
      end
    end

    # Remember the `self` from where the block was written, so method_missing
    # can delegate lookups that fail on the relation back to it.
    def with_previous_context(binding, &block)
      @previous_context = binding.eval('self')
      result = block.call
      @previous_context = nil
      result
    end

    def method_missing(method, *args, &block)
      super
    rescue NameError => e
      if @previous_context
        @previous_context.send(method, *args, &block)
      else
        raise e
      end
    end
  end
end

Put that in config/initializers and try it out!

An nginx HTTP-to-HTTPS Redirect Mystery, and Configuration Advice

2018-03-25

I noticed a weird thing last night on an nginx server I administer. The logs were full of lines like this:

42.232.104.114 - - [25/Mar/2018:04:50:49 +0000] "GET http://www.ioffer.com/i/new-fashion-fine-gold-bracelet-versaec-bracelet-641175733 HTTP/1.1" 301 185 "http://www.ioffer.com/" "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.0; Hotbar 4.1.8.0; RogueCleaner; Alexa Toolbar)"

Traffic was streaming in continuously: maybe ten or twenty requests per second.

At first I thought the server had been hacked, but really it seemed people were just sending lots of traffic and getting 301 redirects. I could reproduce the problem with a telnet session:

$ telnet 198.51.100.42 80
Trying 198.51.100.42...
Connected to example.com.
Escape character is '^]'.
GET http://www.ioffer.com/i/men-shouder-messenger-bag-briefcase-shoulder-bags-649303549 HTTP/1.1
Host: www.ioffer.com

HTTP/1.1 301 Moved Permanently
Server: nginx/1.10.1
Date: Sun, 25 Mar 2018 04:56:06 GMT
Content-Type: text/html
Content-Length: 185
Connection: keep-alive
Location: https://www.ioffer.com/i/men-shouder-messenger-bag-briefcase-shoulder-bags-649303549

<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx/1.10.1</center>
</body>
</html>

In that session, I typed the first two lines after Escape character..., plus the blank line following. Normally a browser would not include a whole URL after GET, only the path, like GET /about.html HTTP/1.1; sending the full URL is what a client does when it is talking to a proxy. Also it may be possible to leave off the Host header. Technically it is required for HTTP/1.1, so I added it just out of habit. I didn’t test without it.

So what was happening here? I was following some common advice to redirect HTTP to HTTPS, using configuration like this:

server {
  listen 80;
  server_name example.com *.example.com;
  return 301 https://$host$request_uri;
}

The problem is that $host evaluates to whatever the client wants. In order of precedence, it can be (1) the host name from the request line (as in my example), (2) the Host header, or (3) what you declared as the server_name for the matching block. A safer alternative is to send people to https://$server_name$request_uri. Then everything is under your control. You can see people recommending that on the ServerFault page.
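In other words, the commonly recommended variant looks something like this (fine as long as you have only the one name):

server {
  listen 80;
  server_name example.com;
  return 301 https://$server_name$request_uri;
}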

The trouble is when you declare more than one server_name, or when one of them is a wildcard. The $server_name variable always evaluates to the first name, and it doesn’t expand wildcards. (How could it?) That wouldn’t work for me, because in this project admins can add new subdomains any time, and I don’t want to update nginx config files when that happens.

Eventually I solved it using a config like this:

server {
  listen 80 default_server;
  server_name example.com;
  return 301 https://example.com$request_uri;
}
server {
  listen 80;
  server_name *.example.com;
  return 301 https://$host$request_uri;
}

Notice the default_server modifier. If any traffic actually matches *.example.com, it will use the second block, but otherwise it will fall back to the first block, where there is no $host variable, but just a hardcoded redirect to my own domain. After I made this change, I immediately saw traffic getting the redirect and making a second request back to my own machine, usually getting a 404. I expect pretty soon whoever is sending this traffic will knock it off. If not, I guess it’s free traffic for me. :-)

(Technically default_server is not required since if no block is the declared default, nginx will make the first the default automatically, but being explicit seems like an improvement, especially here where it matters.)

I believe I could also use a setup like this:

server {
  listen 80 default_server;
  return 301 "https://www.youtube.com/watch?v=dQw4w9WgXcQ";
}
server {
  listen 80;
  server_name example.com *.example.com;
  return 301 https://$host$request_uri;
}

There I list all my legitimate domains in the second block, so the default only matches when people are playing games. I guess I’m too nice to do that for real though, and anyway it would make me nervous that some other misconfiguration would activate that first block more often than I intended.

I’d still like to know what the point of this abuse was. My server wasn’t acting as an open proxy exactly, because it wasn’t fulfilling these requests on behalf of the clients and passing along the response (confirmed with tcpdump -n 'tcp[tcpflags] & (tcp-syn) != 0 and src host 198.51.100.42'); it was just sending a redirect. So what was it accomplishing?

The requests were for only a handful of different domains, mostly Chinese. They came from a variety of IPs. Sometimes an IP would make requests for hours and then disappear. The user agents varied. Most were normal, like Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0), but some mentioned toolbars like the example above.

I guess if it were DNS sending them to my server there would (possibly) be a redirect loop, which I wasn’t seeing. So was my server configured as their proxy?

To learn a little more, I moved nginx over to port 81 and ran this:

mkfifo reply
netcat -kl 80 < reply | tee saved | netcat 127.0.0.1 81 > reply

(At first instead of netcat I tried ./mitmproxy --save-stream-file +http.log --listen-port 80 --mode reverse:http://127.0.0.1:81 --set keep_host_header, but it threw errors on requests with full URLs (GET http://example.com/ HTTP/1.1) because it thought it should only see those in regular mode.)

Once netcat was running I could tail -F saved in another session. I saw requests like this:

GET http://www.douban.com/ HTTP/1.1
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)
Accept: text/html, */*
Accept-Language: zh-cn; en-us
Referer: http://www.douban.com/
Host: www.douban.com
Pragma: no-cache

I also saw one of these:

CONNECT www.kuaishou.com:443 HTTP/1.0
User-Agent: Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11
Host: www.kuaishou.com:443
Content-Length: 0
Proxy-Connection: Keep-Alive
Pragma: no-cache

That is a more normal proxy request, although it seems like it was just regular scanning, because I’ve always returned a 400 to those.

Maybe the requests that were getting 301’d were just regular vulnerability scanning too? I don’t know. It seemed like something more specific than that.

The negatives for me were noisy logs and elevated bandwidth/CPU. Not a huge deal, but whatever was going on, I didn’t want to be a part of it.

. . .

By the way, as long as we’re talking about redirecting HTTP to HTTPS, I should mention HSTS, which is a way of telling browsers never to use HTTP here in the future. If you’re doing a redirect like this, it may be a good thing to add (to the HTTPS response, not the HTTP one). On the other hand it has some risks, if in the future you ever want to use HTTP again.
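Assuming your HTTPS server block looks roughly like the ones above, the header is one extra line. The max-age here is just an example, and think carefully before adding includeSubDomains or preload:

server {
  listen 443 ssl;
  server_name example.com;
  # ssl_certificate, ssl_certificate_key, etc.
  add_header Strict-Transport-Security "max-age=31536000" always;
  # ... the rest of your HTTPS config ...
}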

Counting Topologically Distinct Directed Acyclic Graphs with Marshmallows

2018-03-24

I wrote a miniature Ruby gem to topologically sort a Directed Acyclic Graph (DAG), which is useful when you have a bunch of things that depend on each other (e.g. tasks), and you want to put them in a linear order.

Writing the test suite got me thinking about how to find all the topologically distinct directed acyclic graphs with V vertices and E edges. My current algorithm goes like this:

  1. Start with some large number n of toothpicks and marshmallows.

  2. Call the children.

  3. Try to finish before all the marshmallows are gone.

Here is what we ended up with for all graphs of V = 4:

[Photo: topologically distinct directed acyclic graphs with four vertices]

It’s not bad, I think, but is there a known method that works even without children? Are there any graphs I missed?

(Full disclosure: I redid this photo a couple days later with better-colored toothpicks, so now you can tell which way they point. Marshmallows may be crunchier than they appear.)

Temporal Databases Annotated Bibliography

2017-12-05

I’ve been reading about temporal databases for a few years now, so I think it’s time I share my bibliography and notes. This is presented in “narrative order”, so that you can get a sense of how the research has developed. This article somewhat overlaps a mini literature review I wrote on the Postgres hackers mailing list, but this article is more complete and in a place where I can keep it updated.

Temporal databases let you track the history of things over time: both the history of changes to the database (e.g. for auditing) and the history of the thing itself. They are not the same thing as time-series databases: whereas a time-series database stores time-stamped events, a temporal database stores the history of things, typically by adding a start/end time to each row (so two timestamps, not one). With time-series the challenge is typically scale; with temporal, the challenge is complexity and correctness.

Research

Snodgrass, Richard T. Developing Time-Oriented Database Applications in SQL. 1999. The seminal work on temporal databases and still the most useful introduction I know. Covers the “combinatorial explosion” of non-temporal/state-temporal/system-temporal/bi-temporal tables, current/sequenced/non-sequenced queries, SELECT/INSERT/UPDATE/DELETE, different RDBMS vendors, etc. Very similar to the proposed TSQL2 standard that was ultimately not accepted but still influenced Teradata’s temporal support. Available as a free PDF from his website.

Hugh Darwen and C. J. Date. “An Overview and Analysis of Proposals Based on the TSQL2 Approach.” Latest draft 2005, but originally written earlier. Criticizes the TSQL2 proposal’s use of “statement modifiers”, especially their problems with composability when a view/subquery/CTE/function returns a temporal result. Available as a PDF.

Ralph Kimball and Margy Ross. The Data Warehouse Toolkit. 3rd edition, 2013. (2nd edition 2002, 1st 1996.) (My notes are based on reading the 2nd edition, but I don’t think there are major relevant changes.) This book is not about temporal databases per se, but in Chapter 4 (and scattered around elsewhere) he talks about dealing with data that changes over time (“Slowly Changing Dimensions”). His first suggestion (Type 1) is to ignore the problem and overwrite old data with new. His Type 2 approach (make a new row) is better but loses the continuity between the old row and the new. Type 3 fixes that but supports only one change, not several. This writing is evidence for the need to handle temporal data, and the contortions that result from not having a systematic approach. (These criticisms and the realization that Kimball is trying to solve the same problems as temporal databases come from Johnston’s second book below; I’m glad he made the connection between Kimball and temporal databases!) (pdf)

C. J. Date, Hugh Darwen, Nikos Lorentzos. Time and Relational Theory, Second Edition: Temporal Databases in the Relational Model and SQL. 2nd edition, 2014. (First edition published in 2002.) I read this in March 2018. If you’ve read Date & Darwen’s other works you know they are quite rigorous, often critical of SQL’s compromises vs the pure relational model (e.g. NULL and non-distinct results), and not always very practical. Nonetheless this book is full of great ideas, and I hope anyone thinking about implementing temporal features in an RDBMS product will read it. They mostly deal with state-time, not transaction-time, which is great since that’s where the field most needs attention, although they also show how to apply their ideas to transaction-time tables too.

The core idea is that a temporal table should really have a row for each second (or millisecond or day or whatever is your finest granularity of time), saying what was true at that moment. Then the primary key is just the second plus the regular primary key. This idea isn’t new, but they use it to define everything else in the book. This makes thinking about temporal queries a lot easier, so that if you ever have any hesitation about something, you can think about it in terms of one-row-per-second and it starts to be easy.

Of course no database could adopt that in practice, so they offer operators to transform rows with start/end ranges to one-row-per-second and back again. In almost every case you don’t have to use those operators, but they are a solid theoretical basis for the next step: defining interval-aware primary keys, foreign keys, and all the relational operators. Finally they show how you implement things without “materializing” the expanded version of the data.
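To make that transform concrete, here is a tiny Ruby sketch of the idea, using days instead of seconds and plain hashes instead of tables, purely as illustration:

require 'date'

# One "packed" row: a fact plus the interval during which it held.
packed = { id: 1, name: 'Alice', valid: Date.new(2017, 1, 1)...Date.new(2017, 1, 4) }

# "Expand": one row per day in the interval (their finest granularity would be finer).
expanded = packed[:valid].map { |day| { id: packed[:id], name: packed[:name], on: day } }
# => rows for Jan 1, Jan 2, and Jan 3

# The reverse operator collapses runs of identical rows on adjacent days
# back into a single row with a start/end interval.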

Another idea implicit in that is that temporal concerns belong in the SQL operators, not as “statement modifiers” like in TSQL2. Really that might be the most important idea in the whole book, so I’m glad to see it being applied to Postgres in the papers by Dignös et al (below).

They also argue (vs SQL:2011) that a single interval column type is better than separate start/end columns, which I agree with. Strangely they go against all the existing research by using closed/closed intervals (e.g. [Jan2017,Dec2017]) instead of closed/open ([Jan2017,Jan2018)), without really giving much justification. They also avoid using NULL for an unbounded side (e.g. [Jan2017,NULL)), preferring a magic “end of time” date ([Jan2017,Jan3000)). I wasn’t surprised by the second decision, given the authors’ history, but the first was less explicable. In both cases their approach helps to ghetto-ize their work and impose barriers to adopting it in real SQL products.

I really appreciate how these authors have insisted that valid time (and the same with transaction time) should be a regular column, not something different. That was a big part of their complaint against TSQL2. To be honest when I first read Snodgrass the idea of using pseudo-columns seemed so suboptimal it was hard to take seriously (“Of course they don’t mean it.”), so I’m glad these authors keep pointing out that approach’s shortcomings. Unfortunately in SQL:2011 valid-time is still a pseudo-column made up of two regular date/time columns. The problem is composability, i.e. using a temporal query as a subquery, view, or function result: it all works cleanly if the interval is just another input to your interval-aware operators, but not if you need some kind of extra pseudo-column metadata. I hope implementers will take their advice seriously and not build temporal features on such a distorting idea.

Something I disagreed with was their suggestion to use tables in sixth-normal form—basically every column gets its own table—since attributes can have different lifespans. I can see how that is purer, but it seems like just too much complexity. They probably suspected the same because they always show how to do things with either that approach or tables in a more traditional third-normal form (or BCNF if you prefer). Even that is slightly distorted in order to avoid NULLs, but you can easily look past that.

Finally, I appreciated that on page 282 they mention DDL on temporal databases. Like everyone they say it’s beyond the scope of their current discussion, but it’s a penetrating insight to say, “the database catalog might itself need to be treated as a temporal database.”

SQL:2011 Draft standard. (pdf) Personally I find the standard pretty disappointing. It uses separate start/end columns instead of built-in range types, although range types offer benefits like exclusion constraints and convenient operators for things like “overlaps” that are verbose to code correctly by hand. It only mentions inner joins, not the various outer joins, semi-joins (EXISTS), anti-joins (NOT EXISTS), or aggregates. Many of its features apply only to system-time, not application-time, even though application-time is the more interesting and less-available feature. (There are lots of auditing add-ons, but almost nothing for tracking the history of things.) The syntax seems too specific, lacking appropriate generality. A lot of these drawbacks seem motivated by a goal that goes back to TSQL2: to let people add temporal support to old tables without breaking any existing queries. That has always seemed to me like an unlikely possibility, and an unfortunate source of distortions. I don’t expect something for free, and I don’t mind doing work to migrate a table to a temporal format, as long as the result is good. Instead we get an (ostensible) one-time benefit for a prolonged compromise in functionality and ease-of-use.

Krishna Kulkarni and Jan-Eike Michels. “Temporal Features in SQL:2011”. SIGMOD Record, September 2012. Nice overview of the temporal features included in the SQL:2011 standard. Here is a PDF of the paper. See also these slides by Kulkarni.

Peter Vanroose. “Temporal Data & Time Travel in PostgreSQL,” FOSDEM 2015. (Slides as a pdf) Lots of good detail here about SQL:2011. I’d love to see a recording of this talk if it’s available, but I haven’t found it yet.

Tom Johnston and Randall Weis. Managing Time in Relational Databases: How to Design, Update and Query Temporal Data. 2010. I haven’t read this one yet, although see just below for Johnston’s other book. This one sounds more practical and less original, although I don’t know for sure.

Tom Johnston. Bitemporal Data: Theory and Practice. 2014. I felt like I found a kindred soul when I read how he connects database design and ontology, as I’ve always thought of programming as “applied philosophy.” For him, the Aristotelian propositional logic behind databases is inseparable from their mathematical set-based theory. Johnston gives helpful distinctions between the physical rows in the table, the assertions they represent, and the things themselves. Eventually this leads to a grand vision of connecting every assertion’s bitemporal (or tritemporal) history to its speaker, somewhat like some ideas in the Semantic Web, although this doesn’t sound very practical. Like Date he seems to be landing on something like sixth-normal form, with a view-like presentation layer to bring all the attributes back together again. (Also like Date he seems to acknowledge 6NF may not be practical.) He points out how unsatisfactory Kimball’s suggestions are. He also criticizes the limitations of SQL:2011 and offers some amendments to make it more useful. He describes a (patented) idea of “episodes” to optimize certain temporal queries.

Anton Dignös, Michael H. Böhlen, and Johann Gamper. “Temporal Alignment”, SIGMOD ’12. Amazing! Shows how to define temporal versions of every relational operator by a combination of the traditional operators and just two simple transforms, which they call “align” and “split”. Gives a very readable exposition of the new theory and then describes how they patched Postgres 9.0 and benchmarked the performance. I think this solves the composability problems Date objected to in TSQL2, and unlike SQL:2011 it is general and comprehensive. The focus is on state-time, and I’m not sure how it will map onto bi-temporal, but even just having good state-time functionality would be tremendous. And the paper is only 12 easy-to-read pages! (pdf)

Anton Dignös, Michael Hanspeter Böhlen, Johann Gamper, and Christian S. Jensen. “Extending the kernel of a relational DBMS with comprehensive support for sequenced temporal queries,” ACM Transactions on Database Systems, 41(4):1-46. Continues the previous paper but adds support for scaling the inputs to aggregate groups according to how much of their time period goes into each group. Gives more benchmarks against a patched Postgres 9.5. (pdf) These researchers are now trying to contribute their work to the Postgres core project, of which I am very much in favor. :-)

Tools

Finally here are some tools for temporal support in Postgres. The sad theme is that pretty much everything gives audit support but not history support:

Postgres

The pgaudit extension looks pretty useful but I haven’t tried it yet. According to the AWS docs you can even use this on RDS.

Vlad Arkhipov’s temporal tables extension only supports system-time (auditing). Also on Github and a nice writeup by Clark Dave.

Magnus Hagander presented an approach to capturing system-time history in a separate schema at PGConf US 2015 and PGDay’15 Russia. Here are slides and video. Quite elegant if you want to ask questions like “what did we think as of time t?” If I recall correctly this is similar to one of the ideas proposed at the end of Snodgrass’s book, although I haven’t compared them carefully. Hagander points out that DDL changes against temporal databases are challenging and hopefully infrequent. This is a topic that is almost completely absent from the literature, except for a brief mention in Johnston 2014.

Ruby

Chronomodel extends the ActiveRecord ORM to record system-time history. The pg_audit_log gem works fine but like many audit solutions is rather write-only. I wouldn’t want to build any functionality that has to query its tables to reconstruct history. You could also try paper_trail or audited (formerly acts_as_audited). Of these projects only Chronomodel seems to be aware of temporal database research.

Further Research

Temporal databases are exciting because there is still so much to do. For example:

  • What should the UI look like? Even one dimension adds a lot of complexity, let alone bi-temporal. How do you present this to users? As usual an audit history is easier, and it’s possible to find existing examples, whereas a state-time history is more challenging but probably more valuable. How should we let people view and edit the history of something? How does it work if there is a “Save” button vs save-as-you-type?

  • What does “full stack” temporal support look like? Do we extend REST? What would be a nice ORM interface? Should we use triggers to hide the temporal behavior behind regular-looking SQL? Or maybe extend SQL so you can more explicitly say what you want to do?

  • SELECT support for “as of” semantics or “over time” semantics.

  • Temporal foreign keys. I’m working on this one.

  • DDL changes. For example if you want to add a NOT NULL column, what do you do with the old data? Could there be built-in support to apply constraints only to a given time span?

Postgres isn't running the archive_command on my standby

2017-11-10

This just came up on the Postgres mailing list, and I spent a long time figuring it out a few months ago, so maybe this blog post will make it a bit more Googleable.

The problem is you have a master archiving to a standby, but you want the standby to run an archive command too, either to replicate downstream to another standby, or to take pressure off the master when running base backups, e.g. with the WAL-E backup tool. For some reason the master’s archive_command runs fine, but the standby’s isn’t even getting used!

The issue is that in 9.5 and 9.6, Postgres will ignore an archive_mode=on setting if it is running in standby mode. Arguably this is kind of useful, so that you can set up the standby as close to the master as possible, and if you fail over it will immediately start running that command.

But if you really do want to do archiving from the standby, the solution is to say archive_mode=always. Once you make that change, Postgres will start running your archive_command.
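On the standby, the relevant postgresql.conf lines would look something like this. The WAL-E command is just an example; use whatever archive_command you would normally use:

# postgresql.conf on the standby
archive_mode    = always
archive_command = 'envdir /etc/wal-e.d/env wal-e wal-push %p'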

Btw, if you are using Ansible, as of today the postgresql role does not respect always. If you give it something truthy it will always generate on. I’ve written a pull request to support always, but it is not yet merged.
