Category Archives: Linguistics


Access Twitter posts by country

In this ExploRation I will cover how to retrieve and filter tweets from Twitter by country. The first step will be to connect to the Twitter API using the twitteR and ROAuth packages. If you don’t already have one, you will also need to register for a Twitter developer account and then create an application; this gives you access to an API key and secret. With these packages and credentials in hand, we will then use streamR to download tweets. After retrieving the data, we will keep only those tweets that fall within the borders of a particular country, using the sp package.

Some other packages you will need include plyr and data.table.
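
If any of the packages mentioned in this post are missing from your library, a one-time install along these lines should cover them (a minimal sketch; ggplot2 and maps are included because the plotting later on relies on them):

# Install the packages used in this post (run once, as needed)
install.packages(c("twitteR", "ROAuth", "streamR", "sp", 
                   "plyr", "data.table", "ggplot2", "maps"))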

Here’s a PDF of this exploRation and an R script to run this code.

Twitter API authentication

Before we get started on the R side, we’ll need to set up a Twitter application. First, log in to your Twitter developer account (or create one). Then you’ll follow the link ‘Manage Your Apps’ and select ‘Create New App’.

[Figure: creating a new Twitter application]

Fill out the form, scroll down, and accept the Developer Agreement. You will then be able to name your app and grab your API Key and API Secret under the ‘Keys and Access Tokens’ tab. Add these credentials, the request, access, and authorize URLs, and a path to a file to store the resulting authentication information. The oauth.file object is not strictly necessary, but it means that you will not need to perform the upcoming API handshake every time you want to interface with Twitter.

api.key <- my.api.key # your consumer key
api.secret <- my.api.secret # your consumer secret
request.url <- "https://api.twitter.com/oauth/request_token"
access.url <- "https://api.twitter.com/oauth/access_token"
authorize.url <- "https://api.twitter.com/oauth/authorize"
oauth.file <- "myoauth.RData"

Now that we have this key information set up, it’s on to the authentication. Load the ROAuth package (which will load its dependencies). Then we set some RCurl options to help us create the my.oauth object. OAuthFactory creates a new set of authentication parameters based on our credentials, and then we perform the ‘handshake’. Running my.oauth$handshake() will access your developer account, at which point you will be asked to grant permissions to this application. After accepting, copy the PIN and paste it into the prompt in R. If you want to store this object, go ahead and save it to your hard disk.

library(ROAuth)
options(RCurlOptions = list(capath = system.file("CurlSSL", "cacert.pem", 
                                                 package = "RCurl"), 
                            ssl.verifypeer = TRUE))
my.oauth <- OAuthFactory$new(consumerKey = api.key, 
                             consumerSecret = api.secret, 
                             requestURL = request.url, 
                             accessURL= access.url,
                             authURL = authorize.url)
my.oauth$handshake()
save(my.oauth, file = oauth.file)

From now on we can simply load myoauth.RData and confirm that everything is all right.

library(twitteR)
load(file = "myoauth.RData")
registerTwitterOAuth(my.oauth) # check status
## [1] TRUE

Get tweets

And on to getting some data! streamR::filterStream() gives us access to Twitter streaming data. (Use streamR::userStream() to get specific user timelines.) The parameters of this function may need some explaining: file is set to "" to redirect the stream to the console, locations is set to cover all geo-coordinates, which has the effect of only retrieving tweets with coordinate information, timeout is how long (in seconds) we want to hold the stream open, and oauth is where our credentials vouch for our application.

library(streamR)
world.tweets <- filterStream(file="", # redirect to the console
                             locations = c(-180,-90,180,90), # geo-tweets
                             timeout = 60, # open stream for '60' secs
                             oauth = my.oauth) # use my credentials

The result assigned to world.tweets is the raw JSON returned by the stream. To parse this data into a more user-friendly tabular format, we use streamR::parseTweets().

world.tweets <- parseTweets(world.tweets)

After parsing the tweets we end up with 42 pieces of metadata for each tweet:

text, retweet_count, favorited, truncated, id_str, in_reply_to_screen_name, source, retweeted, created_at, in_reply_to_status_id_str, in_reply_to_user_id_str, lang, listed_count, verified, location, user_id_str, description, geo_enabled, user_created_at, statuses_count, followers_count, favourites_count, protected, user_url, name, time_zone, user_lang, utc_offset, friends_count, screen_name, country_code, country, place_type, full_name, place_name, place_id, place_lat, place_lon, lat, lon, expanded_url, url

For our current purposes much of this information is not necessary. Let’s focus on only a few key columns: language, latitude, longitude, and text.

world.tweets <- world.tweets[, c("lang", "lat", "lon", "text")]

At this point you might want to write this data to disk for future access. One issue that I have found when working with tweet text is that various characters end up causing problems when reading the data back into R. In particular, line breaks end up in some of the text and misalign the columns. So, before writing the data we’ll remove any \\n+ in the tweet text field. And for whatever reason some tweets come with incorrect/illegal lat/lon coordinates; let’s filter those out too.

# Remove extra line breaks
world.tweets$text <- gsub("\\n+", "", world.tweets$text) 
# Remove spurious coordinates
world.tweets <- subset(world.tweets, 
                       (lat <= 90  & lon <= 180  & lat >= -90 & lon >= -180)) 
# Write data to disk
write.table(x = world.tweets, file = "worldtweets.tsv", 
            sep = ",", row.names = FALSE, fileEncoding = "utf8", 
            quote = TRUE, na = "NA")
# Clean up workspace
rm(list = ls())

To read the data back in I use data.table::fread(). It’s a screaming fast import function for tabular data. By default it returns a data.table, which is a great alternative to the data.frame, though below we set data.table = FALSE to get a plain data.frame.

library(data.table)
world.tweets <- fread(input = "worldtweets.tsv", sep = ",", 
                      header = TRUE, data.table = FALSE)

Clipping geo-coordinates

Before we get to filtering the tweets by country, let’s take a look at where the data we’ve captured originates.

library(ggplot2)
world.map <- map_data(map = "world") # get the world map
world.map <- subset(world.map, subset = region != "Antarctica") # remove this region
p <- ggplot(world.map, aes(x = long, y = lat, group = group)) + 
  geom_path() # base plot

p + geom_point(data = world.tweets, # plot tweet origin points
               aes(x = lon, y = lat, color = lang, group = 1), 
               alpha = 1/2) + theme(legend.position = "none")

[Figure: world map of tweet origin points, colored by language]

I’ve added color to the plot to indicate the languages (according to Twitter) that are in the data.
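
If you want a quick sense of which languages dominate the stream before we start clipping, a simple frequency table does the job (a minimal sketch using base R):

# Ten most frequent Twitter-assigned language codes in the captured tweets
head(sort(table(world.tweets$lang), decreasing = TRUE), 10)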

In order to filter these tweets from around the world by country, we need to get spatial polygon data for the country (or countries) of interest. In this example, we’ll use the US data found on the GADM website. Various formats are available, but for our purposes we’ll take the convenient route and select the .RData file. You will be presented with files at several levels, which correspond to coarse- to fine-grained administrative detail. We’ll select the Level 0 data, which covers the national border.

# Download SpatialPolygonsDataFrame in .RData format
url <- "http://biogeo.ucdavis.edu/data/gadm2/R/USA_adm0.RData"
file <- basename(url) # gets the file's name
if (!file.exists(file)) { # If the `file` hasn't been downloaded, do so now
  download.file(url, file)
}
load(file = file) # Now load the `file` from disk

The next step is to extract our tweet coordinates and convert them into a SpatialPoints object with the same projection as the gadm data that we downloaded. This ensures that we are comparing apples to apples when it comes time to filter by country.

coords <- world.tweets[, c("lon", "lat")] # extract/ reorder `lon/lat`
library(sp)
coordinates(coords) <- c("lon", "lat") # create a SpatialPoints object
proj4string(coords) <- proj4string(gadm) # add `gadm` projection to `coords`

Clipping the data that falls outside of the spatial polygon couldn’t be easier: we just subset the coordinates coords extracted from world.tweets by the gadm object. The result returns only those coordinates that fall within the spatial polygon, that is, within the USA.

system.time(usa.coords <- coords[gadm, ]) # filter tweets
##    user  system elapsed 
##  42.754   0.503  43.308
usa.tweets <- as.data.frame(usa.coords@coords) # extract coordinates

We’d like to attach these points to the relevant data from the world.tweets data.frame and drop the rest. To do this we use join() from the plyr package. Since the coordinates are stored under the same column names (lat, lon) in both the usa.tweets and world.tweets data.frames, we don’t have to specify which columns to join by; if not specified, join() joins on all matching columns.

library(plyr)
usa.tweets <- join(usa.tweets, world.tweets)

The result is a data.frame usa.tweets that contains the columns lon, lat, lang, and text for tweets originating from the US.
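
A quick sanity check (a minimal sketch) shows the columns that came through the join and how many tweets survived the clipping:

str(usa.tweets)  # should show the columns lon, lat, lang, text
nrow(usa.tweets) # number of tweets retained within the US polygon

With that confirmed, let’s visualize our work to make sure that we have indeed isolated the relevant tweets.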

p + geom_point(data = usa.tweets, # plot tweet origin points
               aes(x = lon, y = lat, color = lang, group = 1), 
               alpha = 1/2) + theme(legend.position = "none")

[Figure: map of tweets clipped to the United States, colored by language]

So there you go. We’ve downloaded Twitter posts via the official API, written that data to disk, read it back in, and clipped the coordinates that do not fall within the United States.

ACTIV-ES: a comparable, cross-dialect corpus of ‘everyday’ Spanish from Argentina, Mexico, and Spain

The first release of the ACTIV-ES Spanish dialect corpus based on TV/film transcripts is now available here: https://github.com/francojc/activ-es

It includes 3,460,172 total tokens (Argentina: 1,103,039; Mexico: 976,192; Spain: 1,380,941) and comes in running text and word list (1:5 gram) formats. Each format has both a plain text and a part-of-speech tagged version.
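
If you clone the repository, a quick way to get oriented from R is to list the corpus files in the local copy (a minimal sketch; the local directory name activ-es and the assumption that the corpus files are plain .txt files are mine, so check the repository for the actual layout):

# List text files in a local clone of the activ-es repository
# (the path and file extension below are assumptions)
corpus.files <- list.files("activ-es", pattern = "\\.txt$", 
                           recursive = TRUE, full.names = TRUE)
head(corpus.files)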

For more information about the development and evaluation of this resource, you can download our paper from the Ninth International Conference on Language Resources and Evaluation (LREC 2014) here: https://www.academia.edu/6962707/ACTIV-ES_a_comparable_cross-dialect_corpus_of_everyday_Spanish_from_Argentina_Mexico_and_Spain
[Figure: plot of corpus composition by country, year, and genre]

Our WFU Interdisciplinary Linguistics Minor
announces a special lecture by

Dr. Adam Ussishkin
University of Arizona

Assoc. Professor of Linguistics & Cognitive Science


Psycholinguistics of under-studied languages: the case of subliminal speech priming in Maltese


Early and automatic processing of linguistic stimuli is fairly well-studied for resource-heavy languages such as English (cf. work on visual masked priming by Forster and Davis 1984, Forster et al. 2003, among many others), whereas psycholinguistic studies on languages with few resources are much rarer. In this talk, I first describe the creation of the first online language corpus of Maltese, a Semitic language for which few electronic resources exist. Next, I discuss the application of the corpus to a psycholinguistic question and investigate the psycholinguistic reality of the consonantal root, a building block of Semitic languages. This investigation is carried out using the relatively novel subliminal speech priming technique.

Thursday March 1st @ 4pm in Greene Hall 162

Differences among languages: True untranslatability

via Differences among languages: True untranslatability.

ROMAN JAKOBSON, a linguist, is credited with the notion that languages differ not so much in what they can express as what they must express. The common trope that language X has no word for Y is usually useless (it usually means language X uses several words instead of one for Y). But languages do differ significantly in what they force speakers to express, something Lera Boroditsky talks about often in support of the “linguistic relativity” hypothesis.

I was thinking of this today when on the subway, I saw a young man whose shoulder bag bore six red buttons, with “I am loved” written in white, identical except that each was in a different language. They look like this. (I later learned that this is an old campaign that began with the Helzberg Diamond company.)

What struck me was that three of the buttons identified him as female: soy amada (Spanish), io sono amata (Italian) and sou amada (Portuguese). In each, the past participle of “to love” (amar/amare) must agree with the loved thing, and the -a is a feminine ending. The young chap should have had soy amado etc. The poor button-makers had to pick one or the other, and chose feminine.

The German forced no such choice: a man or a woman can say Ich bin geliebt, as the young commuter’s pin did. And Russian doesn’t require it either, but the translation is menya lyubyat, “they love me”.  

And Russian (more than most languages) forces a bunch of other distinctions on English speakers. The average verb of motion requires you to express whether you’re going by vehicle or foot, one-direction or multidirectionally, and in the past tense, makes you include an ending for your own gender. So “I went” would, in one Russian word (khodila, say), express “I [a female] went [by foot] [and I came back].” If you don’t want to express all of that, tough luck. You have to. Jakobson himself was Russian. Perhaps his native language led him to the insight above; learning the English verb go might have had the Russian wondering “that’s it? By what means? There and back, or what? We would never put up with this in Russian.” 

When most people tell you some very unusual word “can’t be translated”, they usually mean words like these “Relationship words that aren’t translatable into English”: shockingly specific single words in other languages like mamihlapinatapei, which is apparently Yagan for “the wordless yet meaningful look shared by two people who desire to initiate something, but are both reluctant to start.” But of course mamihlapinatapei is translatable into English. It’s “the wordless yet meaningful look shared by two people who desire to initiate something, but are both reluctant to start.” Needing several words for one isn’t the same as untranslatability.

What really can’t be translated properly is “go” into Russian, or “loved” into Spanish, not because the English words are too specific but because they’re too vague. Those languages force you to say much more, meaning the poor Helzberg Diamond people can’t make a single button reading “I am loved” in Spanish for both men and women.  The traditional idea of “can’t be translated” has the facts exactly backwards. Who knew that the truly untranslatable words were those that say the least?

Install graphical interface for TreeTagger on Windows

Here’s a slimmed down step-by-step instruction list on how to install the TreeTagger graphical interface on a Windows machine.

1. Download the TreeTagger software for Windows.

2. Unzip this file into your C:\Program Files\ directory. Using WinZip, make sure you have the “Use folder names” box ticked and extract all files.

3. Download the parameter file(s) that you need and extract them into the subdirectory C:\Program Files\TreeTagger\lib.

4. Download and drop the graphical interface files (tagger and training programs) in the C:\Program Files\TreeTagger\bin subdirectory.

5. Then create a shortcut by right-clicking on the tagger and/or training programs and selecting ‘Create shortcut’, and drag that shortcut to the desktop.

You should now be able to launch TreeTagger from the desktop.

Install vislcg3 tools on Mac OS X

Here are the instructions to install the vislcg3 constraint grammar tools on a Mac.

1. Install the Xcode developer tools (App Store)

2. Install cmake and boost. I use Homebrew, but I imagine you could use MacPorts or Fink.

3. Install ICU. This takes a few steps:
A. Download the package here: http://download.icu-project.org/files/icu4c/4.8.1/icu4c-4_8_1-src.tgz (or the latest version) and decompress it:

$ gunzip -d < icu4c-4_8_1-src.tgz | tar -xvf -

Then run:

$ cd icu/source/

It's a good idea to make sure the permissions are set, so run:

$ chmod +x runConfigureICU configure install-sh

B. Now run runConfigureICU like so:

$ ./runConfigureICU MacOSX

C. You'll then run make and make install, and you should be golden:

$ make
$ sudo make install

4. Now it's time to get to vislcg3.
A. Download the files from the svn repository:

$ svn co http://beta.visl.sdu.dk/svn/visl/tools/vislcg3/trunk vislcg3

Then move into the main directory:

$ cd vislcg3/

B. Run the included configuration script to set up the build:

$ ./cmake.sh

C. Run make and make install to finalize this thing.

$ make
$ sudo make install

D. Now check to see that it's in your path:

$ which vislcg3

And if you get a path to the binary, you're ready to go!