It is high time to really soup up the data: we will augment some of it (tying in other useful datasets or descriptions) and continue to reformat much of it. However, as discussed in the Data Intro, some of the data must, lamentably, be cleaned up by hand. The following steps were done after the previous page's code but before the code of the present page:
- Location and Place data were edited to be specific enough for geocoding via Google's API or my own knowledge. For example, "Brick Oven Pizza" was updated to "Brick Oven Pizza New Haven," "my dorm" was updated to "Stiles Dorm," and so on.
- Dorm data (Dorm1, …, Dorm5) were corrected if the response does not correctly correspond to the approximate location data (Loc1, …, Loc5). For example, if the listed dorm information is "Off-campus" but the corresponding location is outside of New Haven, it is updated to "Not in New Haven."
Ideally the cleanup could all be done programmatically; however, the only address validation offered by Qualtrics is for U.S. ZIP codes, which was not granular enough for Place data and too restrictive for student Location data (given international travel). In addition, some important Stiles and Yale locations are too specific for geocoding services such as Google's to know (e.g., "Stiles Library" or "Stiles Game Room"), hence the need for some manual intervention. The resultant dataframes, one for general survey responses (cleaned-up Location data) and one for meaningful/nostalgic places (cleaned-up Place data), were then saved as CSVs with the _locclean suffix on the filename.
In the sections to follow, a nomenclature for talking about the different time periods of 2020 is developed, and then the data is finally prepped for Connectedness (Feeling Connected), Student Locations (Mapping Moose), and Meaningful/Nostalgic (Key) Places.
t <- "20201227_235052"
data.students.loc <- read.csv(paste0("data/students_", t, "_locclean.csv")) %>%
select(-c(Email, starts_with("Place"))) %>%
# New variable: we couldn't do this in the last page since the data wasn't fully cleaned.
mutate(Housing.F20=case_when(
grepl("Stiles", Dorm5) ~ "Stiles",
grepl("Vanderbilt|LW", Dorm5) ~ "Old Campus",
Dorm5 == "Off-campus" ~ "Off-Campus",
Dorm5 == "Not in New Haven" ~ "Remote",
T ~ Dorm5))
For each time period ("epoch," as discussed on the previous page), students could list an arbitrary number of locations in which they resided over that time. Let's expand those places into new columns ("waypoints"), each named Loc_{e,w}, where e is the epoch (time period) and w is the numbered waypoint. For the sake of these visualizations, we assume that the waypoints are equally spaced in time. This interpolation is obviously not exact, but it seems a reasonable compromise for not making respondents fill in every single precise date.
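For example, using the epoch boundaries defined further below: epoch 4 starts on day 128 of the year and spans 116 days, and its most itinerant respondent lists 4 waypoints, so waypoint 3 of that epoch lands on day 128 + round((116 / 4) × 2) = 186.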
data.locations <- data.students.loc %>%
# Convert each Location column from a string (delimited by ";") to a str vector
mutate(across(c(Loc1, Loc2, Loc2.5, Loc3, Loc4, Loc5, Loc6), ~ str_split(.x, pattern=";"))) %>%
# First lengthening: pivot Location columns into rows (clarifying that Location is an address)
pivot_longer(c(Loc1, Loc2, Loc2.5, Loc3, Loc4, Loc5, Loc6),
names_to=c("Epoch"),
names_pattern="Loc(.*)",
values_to="Address") %>%
# Second lengthening: un-nest the string vectors we made, essentially pivoting
# each row containing a vector into multiple rows containing its elements. The number
# of places that a respondent lists becomes the number of "Waypoints" that
# they go through in a given epoch.
unnest_longer(Address, indices_to="Waypoint") %>%
# Cleanup
mutate(Address=trimws(Address)) %>%
arrange(ID, Epoch, Waypoint)
# How many Waypoints end up in each Epoch?
mw <- data.locations %>% group_by(Epoch) %>% summarize(MaxWaypoint=max(Waypoint))
mw
# A tibble: 7 x 2
Epoch MaxWaypoint
* <chr> <int>
1 1 1
2 2 4
3 2.5 3
4 3 2
5 4 4
6 5 3
7 6 5
Note that many respondents don’t have data listed for some Waypoints, since not everyone listed the same number of locations in each epoch. (In fact, the majority didn’t move around that much within each epoch.) We just assume that their location stays constant until the next-provided location.
We now add a couple of representations to help the visualizations make more sense of the Epoch/Waypoint nomenclature: one converts each Epoch/Waypoint to a day of the year (from 1 to 365), and the other is just a string combining the Epoch and Waypoint labels.
First: approximate dates of 2020.
# Day of the year (out of 366 in 2020) on which each epoch begins;
# "7" is a sentinel marking the end of the year
epoch_days <- c("1"=13, "2"=67, "2.5"=76, "3"=83, "4"=128, "5"=244, "6"=327, "7"=366)
# Maximum number of waypoints any respondent listed in each epoch
n_waypoints <- setNames(mw$MaxWaypoint, nm=mw$Epoch)
# Length of each epoch in days (difference between consecutive start days)
epoch_lengths <- setNames(epoch_days[2:8] - epoch_days[1:7], nm=mw$Epoch)
waypoint_days <- data.locations %>%
select(Epoch, Waypoint) %>%
unique() %>%
mutate(days_no_waypt=epoch_days[Epoch]) %>%
mutate(waypt_duration=round(
(epoch_lengths[Epoch] / n_waypoints[Epoch]) * (as.numeric(Waypoint) - 1)
)) %>%
mutate(Day=(days_no_waypt + waypt_duration)) %>%
select(-c(days_no_waypt, waypt_duration)) %>%
arrange(Day, Waypoint)
waypoint_days <- waypoint_days %>%
  mutate(EndDay=lead(Day, default=366) - 1) # last waypoint runs through day 365
waypoint_start <- waypoint_days %>%
select(!EndDay) %>%
pivot_wider(names_from=Waypoint, values_from=c(Day)) %>%
column_to_rownames("Epoch")
waypoint_end <- waypoint_days %>%
select(!Day) %>%
pivot_wider(names_from=Waypoint, values_from=c(EndDay)) %>%
column_to_rownames("Epoch")
# The above computation is equivalent to the following (except in ragged matrix form):
# waypoint_days <- matrix(nrow=7, ncol=max(n_waypoints), dimnames=list(c(1, 2, "2.5", 3, 4, 5, 6), NULL))
# for (e in 1:nrow(waypoint_days)) {
# for (w in 1:n_waypoints[e]) {
#     duration_we <- round((epoch_lengths[e] / n_waypoints[e]) * (w - 1))
# waypoint_days[e, w] <- epoch_days[e] + duration_we
# }
# }
Now we augment the data just a little bit more to complete our assumption that people’s locations stayed constant until the next-provided location. In order to make for sane interpolations when animating people’s movements on a map, we want to clarify their location immediately prior to the next epoch. Otherwise, some people might slowly interpolate across the country over a series of many days.
# Day on which a given (Epoch, Waypoint) begins
get_day.V <- Vectorize(function(e, w) {
  return(as.numeric(waypoint_start[e, w]))
})
# Day on which a given (Epoch, Waypoint) ends. If this is the student's last
# waypoint in the epoch, extend it through the epoch's final waypoint so that
# their location holds steady until the next epoch begins.
get_endday.V <- Vectorize(function(id, e, w) {
  s <- data.locations %>% filter(ID==id & Epoch==e)
  last_waypoint_overall <- n_waypoints[e]
  last_waypoint_student <- max(s$Waypoint)
  last_waypoint <- ifelse(w != last_waypoint_student, w, last_waypoint_overall)
  return(as.numeric(waypoint_end[e, last_waypoint]))
})
data.locations <- data.locations %>%
mutate(start=get_day.V(Epoch, Waypoint)) %>%
mutate(end=get_endday.V(ID, Epoch, Waypoint)) %>%
pivot_longer(c(start, end), values_to="Day", names_to="DayType")
# Sorry, this isn't tidy in the slightest. It involves selectively duplicating
# and then modifying rows, which is much more readable and approachable via loops.
# for s in students
# for (id in unique(data.locations$ID)) {
# s <- data.locations %>% filter(ID==id)
# # for e in epochs
# for (e in c(1, 2, "2.5", 3, 4, 5, 6)) {
# # get the students' last waypoint in epoch t
# w <- s %>% filter(Epoch==e)
# mw <- max(w$Waypoint)
# # copy that row, but set Waypoint to "end" and Day to (next epoch start - 1)
# new_row <- w %>% filter(Waypoint==mw) %>%
# mutate(Waypoint="end") %>%
# mutate(Day=epoch_days[which(names(epoch_days)==e)+1]-1)
# data.locations <- data.locations %>% rbind(new_row)
# }
# }
Finally: a simple label that pastes Epoch-Waypoint together, and a formatted Y-M-D date.
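(The chunk for this step isn't shown on the page; a minimal sketch of what it might look like, with the column names Label and Date assumed rather than taken from the source:)
data.locations <- data.locations %>%
  # Simple Epoch-Waypoint label, e.g. "2.5-1"
  mutate(Label=paste(Epoch, Waypoint, sep="-")) %>%
  # Formatted Y-M-D date: day 1 of the year maps to "2020-01-01"
  mutate(Date=as.Date(Day - 1, origin="2020-01-01"))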
# Stash a copy that keeps the Connectedness columns for later, then drop them here
data.locations.conn <- data.locations
data.locations <- data.locations %>% select(-c(Conn1:Conn6))
Now for the final and most important augmentation: geocoding each address.
# Reduce the list of many locations into unique locations, and geocode those
locations_unique <- data.locations %>%
# Get just the unique addresses
distinct(Address) %>%
# Geocode
mutate_geocode(Address) %>%
st_as_sf(coords=c("lon", "lat"), crs=4326) # Note that order should be LON, LAT
# Lookup those geocoded coordinates for each (non-unique) location in the full table
get_location_sf.V <- Vectorize(function(address) {
  return((locations_unique %>% filter(Address==address))$geometry)
})
data.locations <- data.locations %>%
mutate(geometry=get_location_sf.V(Address)) %>%
st_as_sf()
# Save
saveRDS(data.locations, file=paste0("data/locations_", t, ".rds"))
This converts the data to Simple Features, which we love <3. The final product is in the table below.
data.locations <- readRDS(paste0("data/locations_", t, ".rds"))
paged_table(sample_n(data.locations, n()))
# N.b., Waypoint data was not collected for Connectedness
data.connectedness <- data.students.loc %>%
select(-c(Loc1:Loc6)) %>%
pivot_longer(Conn1:Conn6,
names_to="Epoch",
names_pattern="Conn(.*)",
values_to="Connectedness")
Thankfully most of the setup work for this was done in the previous section :-).
data.connectedness <- data.connectedness %>% mutate(Day=epoch_days[Epoch])
We also want to extend the data through 2020 (the 365th day) for the sake of visualizing estimated Connectedness through the year.
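(The chunk for this extension isn't shown either; a minimal sketch, assuming each student's last reported value simply carries through to day 365, and with year_end an invented helper name:)
# Duplicate each student's final Connectedness response at year end
year_end <- data.connectedness %>%
  group_by(ID) %>%
  slice_max(Day, n=1) %>%
  ungroup() %>%
  mutate(Epoch="end", Day=365)
data.connectedness <- bind_rows(data.connectedness, year_end) %>%
  arrange(ID, Day)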
paged_table(sample_n(data.connectedness, n()))
Similarly, we want to geocode the "Place" data asking about nostalgia/meaning. However, as with the student Location data, we first needed to hand-clean the Place data in order to clarify certain locations. In addition, some places were removed for being too vague to pin down, and some incorrect labels of Physical or Virtual were corrected.
Note, however, that there are some special places which aren't lookup-able by Google or other geocoding services, namely the hyper-specific Stiles spots mentioned earlier (e.g., "Stiles Library" or "Stiles Game Room"). These occurrences are separated from the table and mapped to coordinates via manual lookup data. The remaining physical locations are geocoded as above.
data.places <- read.csv(paste0("data/places_", t, "_locclean.csv")) %>%
mutate(Name=trimws(toTitleCase(Name)))
data.places.stiles <- data.places %>%
filter(Type=="Physical" & str_detect(Name, "(?i)stiles")) # ?i is case-insensitive regex
data.places.yale <- data.places %>%
filter(Type=="Physical" & !str_detect(Name, "(?i)stiles"))
data.places.virt <- data.places %>%
filter(Type=="Virtual")
# Map Stiles spots to GPS coordinates via the manual lookup table
places.yale <- read.csv("data/yale.locations.csv")
# Although this could be done with tidy functions (leaning heavily on case_when),
# I think this approach is more readable and is much less repetitive. With small
# data, the performance is no problem.
get_yale_coord <- function(name, col) {
n <- case_when(
str_detect(name, "(?i)common") ~ "Common Room",
str_detect(name, "(?i)dining") ~ "Dining Hall",
str_detect(name, "(?i)dorm") ~ "Dorm",
str_detect(name, "(?i)library") ~ "Library",
str_detect(name, "(?i)buttery") ~ "Buttery",
str_detect(name, "(?i)game") ~ "Game Room",
str_detect(name, "(?i)kitchen") ~ "Kitchen",
str_detect(name, "(?i)crescent") ~ "Crescent Courtyard",
str_detect(name, "(?i)music") ~ "Music Room",
str_detect(name, "(?i)gym") ~ "Gym",
str_detect(name, "(?i)courtyard") ~ "Courtyard",
T ~ "Stiles"
)
return(places.yale %>% filter(Name==n) %>% pull(col))
}
get_yale_coord.V <- Vectorize(get_yale_coord)
data.places.stiles <- data.places.stiles %>%
mutate(lon=get_yale_coord.V(Name, "Lon")) %>%
mutate(lat=get_yale_coord.V(Name, "Lat"))
# Automatically geocode (via Google) all other spots to GPS coordinates
data.places.yale <- data.places.yale %>%
mutate_geocode(Name)
# Sanity check
range(data.places.yale$lon[is.finite(data.places.yale$lon)]) # [1] -73.32775 -72.89932
range(data.places.yale$lat[is.finite(data.places.yale$lat)]) # [1] 41.11225 41.34746
(A sufficient data table was given on the previous page.)
Virtual places must remain as-is, as Science has not yet determined how to map the virtual realm onto the real world of latitude/longitude coordinates. Instead we will make a WORD CLOUD :D.
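(A minimal sketch of that word cloud, assuming the wordcloud package rather than whatever this site actually uses:)
library(wordcloud) # assumed package choice
# Tally how often each virtual place was named, sizing words by frequency
virt_counts <- data.places.virt %>% count(Name)
wordcloud(words=virt_counts$Name, freq=virt_counts$n, min.freq=1)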