Working day vs. weekend page views

Two months ago Stack Overflow published an interesting blog post on programming languages and whether people are more likely to ask questions about them during the week or on the weekend. It gives some
overview of how widely languages are used in business (week) and hobby (weekend) projects.

From their analysis we can see that, for example, T-SQL, PowerShell and Oracle are used
during the week, whereas Haskell, assembly and C are used on the weekend.

On Wikipedia…

I was interested in checking the same thing using Wikipedia page views data. Of course, with Wikipedia it is a bit different. When someone learns a programming language,
they don’t usually read about it on Wikipedia, but rather find a tutorial or look for answers on Stack Overflow. In some cases, however, Wikipedia can be the main source of knowledge, especially when someone looks for theoretical aspects of programming or technology.

I checked several articles from different categories: databases, programming and data science, using English Wikipedia page views since September 2016. For each article I computed the weekend-to-week ratio (average page views during the weekend / average page views during working days).
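Just to make the definition concrete, here is a minimal sketch of that computation in Hive SQL, assuming a hypothetical pageviews table with one row per article per day (article STRING, dt DATE, views BIGINT):

-- weekend-to-week ratio per article;
-- date_format(dt, 'u') gives the day of week: '1' (Monday) to '7' (Sunday)
select
    article,
    avg(case when date_format(dt, 'u') in ('6', '7') then views end)
        / avg(case when date_format(dt, 'u') not in ('6', '7') then views end)
        as weekend_to_week_ratio
from pageviews
where dt >= '2016-09-01'
group by article;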

Database

The database category shows something interesting: there is a difference between practical and theoretical concepts. For example, the Slowly changing dimension article is more work-related than the normalisation and normal form definitions. On the other end of the scale there is Blockchain, which is the most ‘weekend’ page in this section.

Data science

In the data science section there is an interesting observation: deep learning itself and
various modern frameworks related to deep learning/neural networks are
much more ‘weekend’ articles than older machine learning algorithms.

Programming

As mentioned above, reading about a programming language on Wikipedia is not really
a sign that the language is used in projects. More likely, people will check some detail about
it when they hear the name for the first time. Nevertheless, there are some interesting facts.
As in the Stack Overflow report, Haskell seems to attract more people during weekends.
On the other hand, it has a similar ratio to Java, so this is probably not the best
indicator of how popular a given language is in business.

Design patterns are more work-related than some theoretical articles on
functional programming or language internals (garbage collection or stack buffer overflow).

Surprisingly, Scala seems to be read more often during working days than the other
languages I checked.

Hive – Selecting columns with regular expression

Hive has a rather unique feature that allows selecting columns by
regular expression instead of listing them by name.

It’s very useful when we need to select all columns except one. In most SQL databases we would have to type out all the remaining columns explicitly, but in Hive this feature can save us the typing.

Let’s say there is a people table with columns name, age, city, country and created_at. To select all columns except created_at we can write:

-- treat backtick-quoted identifiers as regular expressions
set hive.support.quoted.identifiers=none;
 
-- `(created_at)?+.+` matches every column name except created_at:
-- the possessive quantifier ?+ consumes 'created_at' without backtracking,
-- so for that exact column nothing is left for the trailing .+ to match
select 
    `(created_at)?+.+`
from people
limit 10;

This is equivalent to:

select
    name, age, city, country
from people
limit 10;

Please note that in Hive 0.13 or later you have to set hive.support.quoted.identifiers to none.
I have never seen such functionality in other SQL databases.
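The same mechanism works for any regular expression, not just exclusions. For example, with the same hive.support.quoted.identifiers setting, selecting only the columns whose names start with c (city, country and created_at in the people table above) could look like this:

select
    `c.*`
from people
limit 10;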

References

Hive Select – https://cwiki.apache.org/confluence/display/Hive/LanguageManual+Select

Spark SQL

This is one of the Hive-specific features that are not available in Spark SQL.

Hadoop user name

Some time ago I was looking for this option:

The HADOOP_USER_NAME environment variable lets you specify the username that will be used when connecting to Hadoop, for example to create new HDFS files or access existing data.

Let’s have a look at the short example:

[root@sandbox ~]# echo "New file" | hdfs dfs -put - /tmp/file_as_root
[root@sandbox ~]# export HADOOP_USER_NAME=hdfs
[root@sandbox ~]# echo "New file" | hdfs dfs -put - /tmp/file_as_hdfs
[root@sandbox ~]# hdfs dfs -ls /tmp/file_*
-rw-r--r--   3 hdfs hdfs        154 2016-05-21 08:20 /tmp/file_as_hdfs
-rw-r--r--   3 root hdfs        154 2016-05-21 08:19 /tmp/file_as_root

So the second file (file_as_hdfs) is owned by the hdfs user, because that was the value of the HADOOP_USER_NAME variable.

Of course, this works only on a Hadoop cluster without Kerberos, but it’s still very useful on a test environment or a VM: you can act as many different users without executing sudo commands all the time.

Hive gotchas – Order By

There is this one feature in Hive that I really hate: ORDER BY col_index.

Historically, the order by clause accepted only column aliases, as in the simple example below:

select id, name from people order by name;

+------------+--------------+--+
| people.id  | people.name  |
+------------+--------------+--+
| 5          | Jimmy        |
| 2          | John         |
| 1          | Kate         |
| 4          | Mike         |
| 3          | Sam          |
+------------+--------------+--+

In other relational databases it is possible to give not only a column alias but also a column index. It is much simpler to say “column 3” than to type the whole name or alias. This option was not supported in Hive at the beginning, but the community noticed that and a ticket was created.

Since Hive 0.11.0 it is possible to order the result by column index as well; however, there is a gotcha here. There is a property that enables this new option: hive.groupby.orderby.position.alias must be set to ‘true’. The problem is that by default it is set to ‘false’, and in that case you can still use numbers in the order by clause, but they are interpreted literally (as constant values), not as column indexes, which is rather strange.

So, for example, in any modern Hive version with default settings you can do something like this:

select id, name from people order by 2;

+------------+--------------+--+
| people.id  | people.name  |
+------------+--------------+--+
| 1          | Kate         |
| 2          | John         |
| 3          | Sam          |
| 4          | Mike         |
| 5          | Jimmy        |
+------------+--------------+--+

As you can see, by default it was interpreted as the “value 2”, not “column number 2”, so the rows came back unsorted. After enabling the option you can change how order by works:

set hive.groupby.orderby.position.alias=true;
select id, name from people order by 2;

+-----+--------+--+
| id  |  name  |
+-----+--------+--+
| 5   | Jimmy  |
| 2   | John   |
| 1   | Kate   |
| 4   | Mike   |
| 3   | Sam    |
+-----+--------+--+

So this time, after enabling the option, we can use the column number to sort by name. The problem is that whenever you work in Hive you have to remember whether hive.groupby.orderby.position.alias is enabled in the current session or not. This makes it rather impractical and limits the usefulness of this syntactic sugar. Moreover, I cannot really see any use case for ordering by a constant value.

References

Hive Order By – https://cwiki.apache.org/confluence/display/Hive/LanguageManual+SortBy

Hive table properties

It was a surprise for me to find these two properties; they are very useful when dealing with large numbers of files generated by other systems. I couldn’t find them in the Hive docs, but you can come across these settings on forums.

skip.header.line.count tells Hive how many lines from the beginning of each file should be skipped. It is useful when you read CSV files whose first line contains a header with column names. It works with text files (ROW FORMAT DELIMITED FIELDS TERMINATED BY…) and with the CSV SerDe. There is also a complementary setting that allows skipping footer lines: skip.footer.line.count. The problem, however, is that Spark doesn’t recognize these properties, so be careful when you plan to read the table later via the Spark HiveContext.
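For example, a table on top of CSV files with a single header row could be declared like this (a minimal sketch with made-up table and column names):

CREATE TABLE people_csv(
    name STRING,
    age INT
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
TBLPROPERTIES('skip.header.line.count'='1');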

Speaking of Hive table properties, the following setting may also be very useful.

serialization.null.format is another table property; it defines how NULL values are encoded in the text file. Here is an example of using the string “null” as the NULL marker:

CREATE TABLE table_null(
     s1 STRING
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
TBLPROPERTIES('serialization.null.format'='null');

So whenever a field contains the string “null”, it will be interpreted as a NULL value.
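For instance, with a made-up data file like this:

aaaa
null

the first row of table_null should come back from select * as the string ‘aaaa’ and the second one as NULL, not as the literal string ‘null’.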

Field separator escape

One more useful property that can be used when dealing with text files is escape.delim. This property allows setting a custom character that will be used to escape the separator in column values:

CREATE TABLE table_escape(
    s1 STRING,
    s2 STRING
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
TBLPROPERTIES('escape.delim'='\\');

(We use a backslash as the escape character, but it has to be escaped, as in Java.)

In this case the following data file:

aaaa,bbbb
aaa\,bbb,cc

will be interpreted as:

0: jdbc:hive2://localhost:10000> select * from table_escape;
+------------------+------------------+--+
| table_escape.s1  | table_escape.s2  |
+------------------+------------------+--+
| aaaa             | bbbb             |
| aaa,bbb          | cc               |
+------------------+------------------+--+

Custom line breaks

There is also syntax that allows splitting records with a character other than the newline:

CREATE TABLE table_lines(
    s1 STRING,
    s2 STRING
) ROW FORMAT DELIMITED 
FIELDS TERMINATED BY ','
LINES TERMINATED BY '|';

However, it is currently not supported (I tried it on Hive 1.2.1):

Error: Error while compiling statement: FAILED: SemanticException 5:20 LINES TERMINATED BY only supports newline '\n' right now. Error encountered near token ''|'' (state=42000,code=40000)

Binary formats

Generally speaking, text files should rather be avoided on Hadoop, because binary columnar formats usually give better performance. Nevertheless, CSV and other plain text files can quite often be found as input from external systems. In such cases it is good to have these formatting options, so you can easily start using Hive in an existing ecosystem without too much hassle.
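A common pattern (just a sketch, with made-up table names) is to treat such a text table only as a staging area and rewrite the data into a columnar format like ORC right after loading:

-- copy the staging table into an ORC-backed table
CREATE TABLE people_orc STORED AS ORC
AS SELECT * FROM people_csv;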