(R/RStudio, Importing, Cleaning, and Manipulating Data)

In our introductory statistics course, we have chosen R and RStudio for their specialized capabilities in statistical analysis and data visualization. R, distinctively designed for statistical computation, stands out from general-purpose languages like Python, particularly in handling sophisticated statistical tasks. This specialization, coupled with its open-source nature, ensures access to a vast array of cutting-edge techniques and a rich repository of statistical packages.

RStudio, an integrated development environment for R, further enhances our analytical capabilities. It streamlines the data analysis process by combining code development, visualization, and project management into a cohesive workflow. This environment is particularly advantageous for those new to programming, facilitating an easier transition into data analysis.

Throughout this course, you will learn not just the technical aspects of R and RStudio but also how to apply them in interpreting real-world data. The skills you acquire here will be invaluable in both academic and professional settings, given R’s widespread use across various industries. This course aims to equip you with the tools and understanding necessary to conduct meaningful data analysis in today’s data-centric world.

We will focus on applying the statistical methodologies learned in class using R’s capabilities. This hands-on approach aims to deepen your understanding of statistical theory and to prepare you to apply it in practical, everyday data analysis situations.

1 R and RStudio

We provide support for using RStudio on the Scholar computing cluster, but due to its high demand near homework deadlines, we strongly recommend installing R and RStudio locally on your personal computers. Local installations offer reliable and convenient access for your coursework, ensuring you can work on assignments without relying on the availability of external servers.

Please be aware that difficulties in accessing the Scholar cluster will not be grounds for assignment extensions. Having R and RStudio set up on your own device will help you avoid such complications, allowing for uninterrupted progress in your studies and giving you more control over your learning experience.

1.1 Downloading and Installing R/RStudio Locally

1.1.1 Windows users

Download the latest R: Latest R Version.

Download the RStudio Desktop graphical user interface: R Studio Desktop.

1.1.2 Mac users

Download both R and RStudio: R first, then RStudio.

If unsure whether to download R for Apple silicon or Intel, go to the upper left corner of your screen, click on the Apple logo, then click About This Mac.

A small screen will pop up that looks like the one below. If your chip is Apple M1, M2, or M3, use the download link for Apple silicon. If it is any version of Intel, use the link for Intel Processor.

For Apple silicon (M1/M2/M3) Macs: Latest R Version.

For Mac with Intel Processor: Latest R Version.

Download RStudio for macOS: R Studio Desktop for macOS 12+.

1.2 Accessing R/RStudio on the Scholar Computing Cluster

1.2.1 General Information for Scholar.

Short Process:
  1. Go to the RStudio link: R Studio on Scholar
  2. Log in with BoilerKey.
  3. Select the RStudio Server link under Interactive Apps.
  4. Launch RStudio Server.
  5. Wait in the queue until the status changes to Running.
  6. Click Connect to RStudio Server.

(We recommend clearing your environment each time you start a new instance; see below for details.)

You are granted access to the Scholar computing cluster this semester. The Scholar cluster is open to all Purdue classroom instructors from any field whose classes include assignments that could make use of its powerful computing resources, from high-end graphics rendering and weather modeling to simulating millions of molecules and exploring the dynamics of social networks. You can access Scholar via the RStudio link above; this link is also listed on Brightspace.

After going to the link, you will see a BoilerKey login page. Use your Purdue account to log in; BoilerKey is required. After logging in, you will be presented with a menu system. Select the Interactive Apps pull-down menu as shown below, then select the RStudio Server option. A VPN is not required to connect to Scholar.

Figure 1: Scholar Cluster Gateway

Next you need to start an RStudio Server instance by requesting access to a node. Use the settings shown in Figure 2 below. Make sure you use the latest version of R; we will not support versions lower than the latest iteration available on Scholar.

Figure 2: Scholar Cluster Node Requests

You will receive a spot in line to wait for a compute node. A menu with status Queued will appear, stating that your node is queued for deployment. After the node is ready, the menu status will change to Running.

Figure 3: Start RStudio Server

Remember, if you want to store anything on your Purdue drives and you are not on campus, you will need to connect to the Purdue VPN. The VPN now requires that you use BoilerKey to log in, even though this is not stated. This is the link to get help on using the Purdue VPN: Purdue VPN Link. Purdue recommends that you download Cisco AnyConnect (requires BoilerKey) and not use the VPN link that is installed on some computers. If you have further questions, you can search for VPN on GoldAnswers or ask on the Q/A discussion forum.

Each time you log in, you should clear your environment to make debugging easier. See Figure 4 below.

Figure 4: Clear Environment
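If you prefer the console, the same reset can be done with one line. This removes every object from your workspace, so only run it when you truly want a clean slate:

rm(list = ls())   # remove all objects from the current environment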

After you have finished working on your assignment and are ready to exit Scholar and RStudio, I recommend you use the logout button as demonstrated below.

  1. Press the orange circle at the upper right of your screen as shown in the diagram below.
    Figure 5: Exit
  2. After pressing the circle, you will briefly see a popup saying that your R session is ending. You can close that window.
  3. When you no longer need access to RStudio, please shut down your compute node by clicking Delete.
    Figure 6: Shutdown RStudio Server

1.2.2 Downloading Data

Data Location: All the necessary files for the computer assignments will be available on Scholar and/or Brightspace. The default location for all computer assignment files stored on Scholar is "/depot/statclass/data/stat35000/2025SUMMER".
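If you want to confirm from within R that the files are there, you can list the contents of that directory (a quick sanity check using the path above):

list.files("/depot/statclass/data/stat35000/2025SUMMER")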

1.3 General Information for R

  1. R uses functions to perform operations. To run a function with two inputs called funcname, we type funcname(input1, input2). (A short example tying several of these points together appears after this list.)
  2. Online help is available in R. Typing ?command at the command prompt will display the help file in a window. Google can also be helpful for clarifying commands, or you can use the help(command) function. For example, ?mean will display information about the mean function. Alternatively, you may type help(mean).
  3. The ‘#’ symbol indicates a comment in R, i.e., everything to the right of the ‘#’ sign will NOT be run as code.
  4. Dataset names should be one word, starting with a letter. Spaces can be indicated using an underscore “_” or period “.”, and capitalization matters. Not all special characters can be used.
  5. When working with moderate to large datasets, avoid printing the dataset to the console. Instead, use the View() command.
  6. Check to see if there is any red on your console. This is the color that R uses for warning and error messages.
  7. R uses the ASCII left arrow “<-” for assignment, emphasizing that the newly defined variable is assigned the output of the expression to the right of the arrow.
  8. A collection of variables in R is called a data frame, similar to a table or spreadsheet.
  9. R is a flexible language used extensively in statistics. New sets of functions in R are distributed as packages, which can be installed and then loaded using the library() function.
  10. If a package is not installed on your computer, you can use the install.packages("PackageName") function. Alternatively, you can go to the Packages tab and then click on "Install" in the bottom right-hand pane of the RStudio window.
    Figure 7: Packages
  11. When working with R, it is highly recommended to create a new variable whenever you modify an existing one. This practice serves two purposes: (1) it safeguards the original data in case of errors, as you won’t overwrite it, and (2) it enables you to compare the new variable with the original, aiding in analysis. Therefore, it is crucial to avoid reusing variable or data frame names. Your modified variables or tables should always have distinct names from any other variables or tables in your current session.
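The short sketch below illustrates several of the points above. The names x and scores are made up for illustration, and readr (used later in this document) stands in for any installed package:

# '#' starts a comment; '<-' assigns the result on the right to the name on the left
x <- c(4, 8, 15, 16, 23, 42)   # a numeric vector
mean(x)                        # run the function mean with one input
?mean                          # open the help page for mean()

# A data frame is a collection of variables, like a small spreadsheet
scores <- data.frame(student = c("A", "B", "C"),
                     exam1 = c(88, 92, 79))
View(scores)                   # inspect the data without printing it to the console

# Install a package once, then load it in each session with library()
# install.packages("readr")   # uncomment on first use
library(readr)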

1.4 Specifics for RStudio and use on Scholar:

Once you open RStudio on Scholar, you will see a window like the one provided in the figure below:

Figure 8: RStudio

1.4.1 Create R Script File

I strongly recommend that you create an R script for each computer assignment. To accomplish this, go to File, which is circled in red in Figure 8, then File → New File → R Script. An “Untitled1” file will be created. This is the place where you can edit your code without running it.

You can use the small “save” button to save the “Untitled1” file and change its name.

Figure 9: Save Scripts

1.4.2 Create R Markdown (Cheat Sheet)

I also recommend you use R Markdown for your own studies. R Markdown is a document format that combines plain text, code, and rich formatting elements to create dynamic reports, presentations, and documents. It is an extension of the Markdown syntax, which allows for easy creation of structured documents using simple and readable text. This document was written in R Markdown. For a tutorial on using R Markdown, read over the tutorial at RMarkdown Tutorial.
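For orientation, here is a minimal sketch of what an R Markdown source file looks like; the title and chunk contents are placeholders:

---
title: "My Analysis"
output: html_document
---

Some explanatory text, written in Markdown.

```{r}
# an R code chunk; the code and its output appear in the rendered document
summary(cars)   # cars is a small built-in dataset
```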

1.4.3 Execution:

Move your cursor to the beginning of the first line of code that you wish to run, then click the Run button circled below. The cursor will automatically move down one line; if you want to run multiple lines, you may repeat this process. Alternatively, you may select all of the lines that you wish to run and then click the Run button. You can also press Ctrl+Enter (Windows) or Command+Return (Mac) to run the code. I strongly recommend that you do not re-run all of your code from the beginning when you fix a mistake; only re-run the code from the point where the mistake was made.

Figure 10: Execute Code

Please don’t write and execute your code directly in the console window, because that makes it difficult to track down bugs and change what you have written. It also makes it hard to submit your code with your homework.

1.4.4 Uploading Files:

Before you can import the data into R, the data file needs to be on the cluster. Therefore, if the data is not already there, you will need to upload it to a directory on Scholar. To accomplish this, use ‘Upload’ as shown below. The major datasets used in this course will already be available on the cluster drive, so this step is unnecessary in those cases.

Figure 11: Uploading Files

After you press upload, you will get a dialog box:

Figure 12: Uploading Files Dialog

From this box, you can upload files that are saved on your W drive or any other remote location that your computer has access to.

1.4.5 Importing Data

There are two ways to import data into R: the RStudio interface or the command line. I recommend you use the command line. Note that when you use the Import Dataset method, it generates code that is run in the console; you can copy this code into your script so the data is loaded automatically when you run the script.

On the right-hand side of RStudio you will find the Environment tab.

Figure 13: Importing Data Method 1

Click Import Dataset → From Text (base)

The following screen will be visible.

Figure 14: Importing Data

Browse to or type in the location of the file. When you are loading the original data file, you need to navigate to the read-only directory where it is stored. For the rest of the semester, you just need to set your working directory to wherever you have stored your data. The directory that our data is in, and the name of the dataset, changes every semester, so it is not included in this tutorial; it is available on Brightspace and in the computer assignments. The directory will be a subdirectory of /depot/statclass/data/stat35000/. If you can type the full name without typos, type it in the box labeled ‘File name’ in the figure above. I sometimes have problems typing in the full name, so I will show you how to navigate to the correct directory instead. To start, click on the three dots circled below.

Figure 15: Importing Data

You will see the following dialog box where you can type in /depot.

Figure 16: Importing Data

After you press OK, you will see the following (or something like this):

Figure 17: Importing Data

Then you can navigate to the correct directory by double-clicking on each directory in the path. Continue double-clicking until you can double-click on the correct data file.

Once the file is loaded either by directly typing in the filename or by navigating through the folders, you will see something similar to the following:

Figure 18: Loading Data

Note that the data used in the Computer Assignment is different from helicon_m. This data is used for tutorial purposes only. You should run all tutorials yourself before working on the computer assignments.

After your file is loaded, check that Heading is set to Yes and the Separator is Comma. Check the Data Frame preview to make sure everything looks good. You can also use the Name field to change the name R gives to the dataset; in this case, I am changing it from ‘helicon_m’ to ‘helicon’ because it is shorter.

Make sure you use the correct settings and your data loads correctly before proceeding with any assignment.

If the data is incorrectly loaded, your variable names may be generic, for example V1, V2, … Also, make sure the variable names do not have an X in front of them, for example X.Length; if this happens, you may need to change the quote setting. Always be sure that the variable names are correctly loaded.

After the data looks like it is in the correct format, click Import.

1.4.6 Method 2 (Preferred Method)

You may also import the data into R directly via the command line. This is most useful when the data is in your working directory, but you can use it whenever you know which directory the file is in.

TableName <- read.csv("filelocation/_name_of_the_file.csv", header = TRUE)

The code below loads the helicon_m.csv file into R from the Helicon data stored on the Scholar cluster.

helicon <- read.csv('/depot/statclass/data/stat35000/2025SUMMER/helicon_m.csv', header = TRUE)
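If you are not sure which directory R treats as your working directory, you can check and change it from the console; the path below is only a placeholder for your own directory:

getwd()                                      # print the current working directory
setwd("/depot/statclass/data/stat35000")     # change it (substitute your own path)
list.files()                                 # list the files visible from there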

1.4.7 General Comments

Because we are using a cluster that saves session information, if you see your dataset in the Global Environment, you can re-use it without re-importing it. However, if you have modified that dataset from the original, I strongly suggest that you either re-import the dataset or change its name when you modify it.
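Two console commands are handy for checking what is already loaded; exists() is shown here with the helicon dataset used in the examples below:

ls()                  # list the objects currently in your Global Environment
exists("helicon")     # TRUE if an object named helicon is already loaded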

You can use the View() command to look over your data. That option is selected by default when you import the data using the menu-based system in RStudio. Note that this command starts with a capital ‘V’, for example, View(helicon).

View(helicon)

This can also be accomplished by clicking on the ‘View’ icon; see Figure 19 below. Please note all the variables will be displayed in the “Environment” window.

Figure 19: Viewing Data

1.4.8 Autocomplete

In RStudio, you can use the “autocomplete” feature by hitting Tab. It will also appear once you have typed enough of a word. The prompt window will look like the one below:

Figure 20: Auto Complete

1.5 Importing Datasets, Cleaning, Manipulating, Printing, and Writing Data.

1.5.1 Methods for Loading Data:

read.table(): Reads data from a delimited text file and returns a data frame.

Simple Wrappers of read.table() are given below:

  • read.csv(): Reads data from a comma-separated values (CSV) file and returns a data frame.
  • read.delim(): Reads data from a tab-separated values (TSV) text file and returns a data frame.
  • readr: The readr package provides efficient functions (up to 10x faster) for reading various data formats. Useful when the data is large.
    Some commonly used functions in the readr package include:
    • read_csv(): Reads data from a comma-separated values (CSV).
    • read_tsv(): Reads data from a tab-separated values (TSV).
    • read_delim(): Reads data from a delimited text file with a custom separator.
  • read_xlsx() (from the readxl package): Reads data from an Excel file (.xlsx format) and returns a data frame (a tibble).
  • haven package functions: The haven package is used for reading and writing data from other statistical package data formats i.e., SAS, SPSS, and Stata formats.
    Some commonly used functions in the haven package include:
    • read_sas(): Reads data from a SAS data file.
    • read_spss(): Reads data from an SPSS data file.
    • read_dta(): Reads data from a Stata data file.
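The sketch below exercises the main loading functions listed above. It assumes the readr and readxl packages are installed, and the file names are placeholders for your own files:

# Base R: returns a data frame
dat <- read.csv("mydata.csv", header = TRUE)

# readr: faster, returns a tibble
library(readr)
dat2 <- read_csv("mydata.csv")

# readxl: read a sheet from an Excel workbook
library(readxl)
dat3 <- read_xlsx("mydata.xlsx")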

1.5.2 Methods for writing data:

write.table(): Writes data to a delimited text file with a custom separator.

Simple Wrappers of write.table() are given below:

  • write.csv(): Writes data to a comma-separated values (CSV) file.
  • write.csv2(): Writes data to a CSV file that uses a semicolon as the field separator and a comma as the decimal point.
  • readr: The readr package also provides efficient functions (up to 10x faster) for writing various data formats. Useful when the data is large.
    Some commonly used functions in the readr package include:
    • write_csv(): Writes data to a comma-separated values (CSV).
    • write_tsv(): Writes data to a tab-separated values (TSV).
    • write_delim(): Writes data to a delimited text file with a custom separator.
  • write_xlsx() (from the writexl package): Writes data to an Excel file (.xlsx format).
  • haven package functions: The haven package is used for reading and writing data from other statistical package data formats i.e., SAS, SPSS, and Stata formats.
    Some commonly used functions in the haven package include:
    • write_sas(): Writes data to a SAS data file.
    • write_sav(): Writes data to an SPSS data file.
    • write_dta(): Writes data to a Stata data file.
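Mirroring the reading sketch above (dat, dat2, and dat3 come from that sketch, and the output file names are again placeholders):

# Base R: write a data frame to a CSV file, dropping row names
write.csv(dat, file = "mydata_out.csv", row.names = FALSE)

# readr: faster, never writes row names
library(readr)
write_csv(dat2, "mydata_out.csv")

# writexl: write to an Excel workbook
library(writexl)
write_xlsx(dat3, "mydata_out.xlsx")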

Please remember where your files are stored so that you can access them when you need them.

1.5.3 Help Loading Data

Below is the help output for the read.table function. You can obtain the same output by typing ?read.table in the R console.

read.table {utils} R Documentation

Data Input

Description

Reads a file in table format and creates a data frame from it, with cases corresponding to lines and variables to fields in the file.

Usage

read.table(file, header = FALSE, sep = "", quote = "\"'",
           dec = ".", numerals = c("allow.loss", "warn.loss", "no.loss"),
           row.names, col.names, as.is = !stringsAsFactors,
           na.strings = "NA", colClasses = NA, nrows = -1,
           skip = 0, check.names = TRUE, fill = !blank.lines.skip,
           strip.white = FALSE, blank.lines.skip = TRUE,
           comment.char = "#",
           allowEscapes = FALSE, flush = FALSE,
           stringsAsFactors = FALSE,
           fileEncoding = "", encoding = "unknown", text, skipNul = FALSE)

read.csv(file, header = TRUE, sep = ",", quote = "\"",
         dec = ".", fill = TRUE, comment.char = "", ...)

read.csv2(file, header = TRUE, sep = ";", quote = "\"",
          dec = ",", fill = TRUE, comment.char = "", ...)

read.delim(file, header = TRUE, sep = "\t", quote = "\"",
           dec = ".", fill = TRUE, comment.char = "", ...)

read.delim2(file, header = TRUE, sep = "\t", quote = "\"",
            dec = ",", fill = TRUE, comment.char = "", ...)

Arguments

file

the name of the file which the data are to be read from. Each row of the table appears as one line of the file. If it does not contain an absolute path, the file name is relative to the current working directory, getwd(). Tilde-expansion is performed where supported. This can be a compressed file (see file).

Alternatively, file can be a readable text-mode connection (which will be opened for reading if necessary, and if so closed (and hence destroyed) at the end of the function call). (If stdin() is used, the prompts for lines may be somewhat confusing. Terminate input with a blank line or an EOF signal, Ctrl-D on Unix and Ctrl-Z on Windows. Any pushback on stdin() will be cleared before return.)

file can also be a complete URL. (For the supported URL schemes, see the ‘URLs’ section of the help for url.)

header

a logical value indicating whether the file contains the names of the variables as its first line. If missing, the value is determined from the file format: header is set to TRUE if and only if the first row contains one fewer field than the number of columns.

sep

the field separator character. Values on each line of the file are separated by this character. If sep = "" (the default for read.table) the separator is ‘white space’, that is one or more spaces, tabs, newlines or carriage returns.

quote

the set of quoting characters. To disable quoting altogether, use quote = "". See scan for the behaviour on quotes embedded in quotes. Quoting is only considered for columns read as character, which is all of them unless colClasses is specified.

dec

the character used in the file for decimal points.

numerals

string indicating how to convert numbers whose conversion to double precision would lose accuracy, see type.convert. Can be abbreviated. (Applies also to complex-number inputs.)

row.names

a vector of row names. This can be a vector giving the actual row names, or a single number giving the column of the table which contains the row names, or character string giving the name of the table column containing the row names.

If there is a header and the first row contains one fewer field than the number of columns, the first column in the input is used for the row names. Otherwise if row.names is missing, the rows are numbered.

Using row.names = NULL forces row numbering. Missing or NULL row.names generate row names that are considered to be ‘automatic’ (and not preserved by as.matrix).

col.names

a vector of optional names for the variables. The default is to use "V" followed by the column number.

as.is

controls conversion of character variables (insofar as they are not converted to logical, numeric or complex) to factors, if not otherwise specified by colClasses. Its value is either a vector of logicals (values are recycled if necessary), or a vector of numeric or character indices which specify which columns should not be converted to factors.

Note: to suppress all conversions including those of numeric columns, set colClasses = "character".

Note that as.is is specified per column (not per variable) and so includes the column of row names (if any) and any columns to be skipped.

na.strings

a character vector of strings which are to be interpreted as NA values. Blank fields are also considered to be missing values in logical, integer, numeric and complex fields. Note that the test happens after white space is stripped from the input, so na.strings values may need their own white space stripped in advance.

colClasses

character. A vector of classes to be assumed for the columns. If unnamed, recycled as necessary. If named, names are matched with unspecified values being taken to be NA.

Possible values are NA (the default, when type.convert is used), "NULL" (when the column is skipped), one of the atomic vector classes (logical, integer, numeric, complex, character, raw), or "factor", "Date" or "POSIXct". Otherwise there needs to be an as method (from package methods) for conversion from "character" to the specified formal class.

Note that colClasses is specified per column (not per variable) and so includes the column of row names (if any).

nrows

integer: the maximum number of rows to read in. Negative and other invalid values are ignored.

skip

integer: the number of lines of the data file to skip before beginning to read data.

check.names

logical. If TRUE then the names of the variables in the data frame are checked to ensure that they are syntactically valid variable names. If necessary they are adjusted (by make.names) so that they are, and also to ensure that there are no duplicates.

fill

logical. If TRUE then in case the rows have unequal length, blank fields are implicitly added. See ‘Details’.

strip.white

logical. Used only when sep has been specified, and allows the stripping of leading and trailing white space from unquoted character fields (numeric fields are always stripped). See scan for further details (including the exact meaning of ‘white space’), remembering that the columns may include the row names.

blank.lines.skip

logical: if TRUE blank lines in the input are ignored.

comment.char

character: a character vector of length one containing a single character or an empty string. Use "" to turn off the interpretation of comments altogether.

allowEscapes

logical. Should C-style escapes such as ‘⁠\n⁠’ be processed or read verbatim (the default)? Note that if not within quotes these could be interpreted as a delimiter (but not as a comment character). For more details see scan.

flush

logical: if TRUE, scan will flush to the end of the line after reading the last of the fields requested. This allows putting comments after the last field.

stringsAsFactors

logical: should character vectors be converted to factors? Note that this is overridden by as.is and colClasses, both of which allow finer control.

fileEncoding

character string: if non-empty declares the encoding used on a file (not a connection) so the character data can be re-encoded. See the ‘Encoding’ section of the help for file, the ‘R Data Import/Export’ manual and ‘Note’.

encoding

encoding to be assumed for input strings. It is used to mark character strings as known to be in Latin-1 or UTF-8 (see Encoding): it is not used to re-encode the input, but allows R to handle encoded strings in their native encoding (if one of those two). See ‘Value’ and ‘Note’.

text

character string: if file is not supplied and this is, then data are read from the value of text via a text connection. Notice that a literal string can be used to include (small) data sets within R code.

skipNul

logical: should nuls be skipped?

...

Further arguments to be passed to read.table.

Details

This function is the principal means of reading tabular data into R.

Unless colClasses is specified, all columns are read as character columns and then converted using type.convert to logical, integer, numeric, complex or (depending on as.is) factor as appropriate. Quotes are (by default) interpreted in all fields, so a column of values like "42" will result in an integer column.

A field or line is ‘blank’ if it contains nothing (except whitespace if no separator is specified) before a comment character or the end of the field or line.

If row.names is not specified and the header line has one less entry than the number of columns, the first column is taken to be the row names. This allows data frames to be read in from the format in which they are printed. If row.names is specified and does not refer to the first column, that column is discarded from such files.

The number of data columns is determined by looking at the first five lines of input (or the whole input if it has less than five lines), or from the length of col.names if it is specified and is longer. This could conceivably be wrong if fill or blank.lines.skip are true, so specify col.names if necessary (as in the ‘Examples’).

read.csv and read.csv2 are identical to read.table except for the defaults. They are intended for reading ‘comma separated value’ files (‘.csv’) or (read.csv2) the variant used in countries that use a comma as decimal point and a semicolon as field separator. Similarly, read.delim and read.delim2 are for reading delimited files, defaulting to the TAB character for the delimiter. Notice that header = TRUE and fill = TRUE in these variants, and that the comment character is disabled.

The rest of the line after a comment character is skipped; quotes are not processed in comments. Complete comment lines are allowed provided blank.lines.skip = TRUE; however, comment lines prior to the header must have the comment character in the first non-blank column.

Quoted fields with embedded newlines are supported except after a comment character. Embedded nuls are unsupported: skipping them (with skipNul = TRUE) may work.

Value

A data frame (data.frame) containing a representation of the data in the file.

Empty input is an error unless col.names is specified, when a 0-row data frame is returned: similarly giving just a header line if header = TRUE results in a 0-row data frame. Note that in either case the columns will be logical unless colClasses was supplied.

Character strings in the result (including factor levels) will have a declared encoding if encoding is "latin1" or "UTF-8".

CSV files

See the help on write.csv for the various conventions for .csv files. The commonest form of CSV file with row names needs to be read with read.csv(..., row.names = 1) to use the names in the first column of the file as row names.

Memory usage

These functions can use a surprising amount of memory when reading large files. There is extensive discussion in the ‘R Data Import/Export’ manual, supplementing the notes here.

Less memory will be used if colClasses is specified as one of the six atomic vector classes. This can be particularly so when reading a column that takes many distinct numeric values, as storing each distinct value as a character string can take up to 14 times as much memory as storing it as an integer.

Using nrows, even as a mild over-estimate, will help memory usage.

Using comment.char = "" will be appreciably faster than the read.table default.

read.table is not the right tool for reading large matrices, especially those with many columns: it is designed to read data frames which may have columns of very different classes. Use scan instead for matrices.

Note

The columns referred to in as.is and colClasses include the column of row names (if any).

There are two approaches for reading input that is not in the local encoding. If the input is known to be UTF-8 or Latin1, use the encoding argument to declare that. If the input is in some other encoding, then it may be translated on input. The fileEncoding argument achieves this by setting up a connection to do the re-encoding into the current locale. Note that on Windows or other systems not running in a UTF-8 locale, this may not be possible.

References

Chambers, J. M. (1992) Data for models. Chapter 3 of Statistical Models in S eds J. M. Chambers and T. J. Hastie, Wadsworth & Brooks/Cole.

See Also

The ‘R Data Import/Export’ manual.

scan, type.convert, read.fwf for reading fixed width formatted input; write.table; data.frame.

count.fields can be useful to determine problems with reading files which result in reports of incorrect record lengths (see the ‘Examples’ below).

https://tools.ietf.org/html/rfc4180 for the IANA definition of CSV files (which requires comma as separator and CRLF line endings).

Examples

## using count.fields to handle unknown maximum number of fields
## when fill = TRUE
test1 <- c(1:5, "6,7", "8,9,10")
tf <- tempfile()
writeLines(test1, tf)

read.csv(tf, fill = TRUE) # 1 column
ncol <- max(count.fields(tf, sep = ","))
read.csv(tf, fill = TRUE, header = FALSE,
         col.names = paste0("V", seq_len(ncol)))
unlink(tf)

## "Inline" data set, using text=
## Notice that leading and trailing empty lines are auto-trimmed

read.table(header = TRUE, text = "
a b
1 2
3 4
")

2 Hummingbirds and flowers Example

Hummingbirds and flowers. (Dataset: helicon_m.csv) Different varieties of the tropical flower Heliconia are fertilized by different species of hummingbirds. Over time, the lengths of the flowers and the form of the hummingbirds’ beaks have evolved to match each other. Here are data on the lengths in millimeters of three varieties of these flowers on the island of Dominica:

Figure 21: Hummingbirds and flowers

2.1 Load the helicon_m.csv data

helicon <- read.csv('/depot/statclass/data/stat35000/2025SUMMER/helicon_m.csv', header = TRUE)

It is important to look at your data after you import it to ensure that there are no problems.

If you have a small dataset, you may print the entire contents of the file to the console by typing the table name. NEVER USE THIS COMMAND WITH THE COMPUTER ASSIGNMENT DATASET; it is too big.

For larger files, you can use the View() command, head(tablename), tail(tablename), or list the specific rows that you want to print. head() and tail() print the top and bottom rows of all variables in the dataset. Unless explicitly stated, never submit this output in the assignments.

Figure 22: Hummingbirds and flowers

I have highlighted the first five data points in the results of the View() command.

If you want to copy tables (or parts of tables) or graphs from the R output, I suggest that you use “Snip & Sketch.” You can also use this tool to highlight your answer. This is the procedure that I used to create the above table. If you are just copying information from the console, the Snipping Tool is not required.

Lastly, it might be easier if you only print specific rows and variables. The following command prints rows 2, 20, and 50 of the variables “Length” and “Variety”, in that order. Be sure to include at least two variable names, or the output is very confusing.

helicon[c(2,20,50),c("Length","Variety")]

You must include the c() wrapper for each list of indices, and the variable names must be enclosed in quotes.
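Equivalently, you can select columns by position instead of by name; as the names(helicon) output in the next subsection shows, Variety is column 1 and Length is column 2:

helicon[c(2,20,50), c(2,1)]   # same rows; columns Length then Variety, by position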

2.2 Exploring the Data

The following are common operations for exploring your data. Please take time to explore what each function call is doing. Remember you can use the built-in help function to explore each function in detail.

head(helicon)
tail(helicon)
helicon[20:30, ]
names(helicon)
## [1] "Variety"     "Length"      "bract_count"
dim(helicon)
## [1] 57  3
class(helicon)
## [1] "data.frame"
class(helicon[1, ])
## [1] "data.frame"
class(helicon[ , 1])
## [1] "character"
class(helicon[ , 2])
## [1] "numeric"
summary(helicon[ , 2])
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max.    NA's 
##   34.57   37.17   39.40   41.07   46.59   50.26       3

2.3 Cleaning and Saving Datasets

Since many datasets have missing values, it is important to process them before beginning analyses. You should always explore the missing-value patterns in your data the first time you load it.

The R function is.na() will create a matrix of TRUE and FALSE values, which can be used to determine the location and the number of missing values in the data. To determine the total number of missing values/elements, we can sum over the values in this TRUE/FALSE matrix, as TRUE is treated as 1 and FALSE as 0.

helicon_NA <- is.na(helicon)
total_NA <- sum(helicon_NA)
total_NA
## [1] 48

The above code determines how many cells contain missing values.

If instead we want to know how many observations/rows have missing values, we can use the complete.cases() function, which returns a vector of TRUE/FALSE values indicating which rows have no missing values. If we want to know whether a row has a missing value, we can negate these results with the ! operator.

helicon_NA <- is.na(helicon)
helicon_missing_obs <- !complete.cases(helicon) 
total_obs_NA <- sum(helicon_missing_obs)
total_obs_NA
## [1] 45

We can see that there are quite a lot of missing values. However, it is important to explore further and determine how many missing values are in each column. The colSums() function gives the number of missing values associated with each variable.

column_NA <- colSums(helicon_NA)
column_NA
##     Variety      Length bract_count 
##           0           3          45

The variable labeled bract_count is more than 70% missing, as the helicon data contains only 57 rows. Since the bract_count variable does not contain much data, we will drop it from the dataset.

helicon_partial_cleaned <- helicon[,c("Variety", "Length")] 

Another way to achieve the same result is to use the subset function and its select argument. Use help(subset) to learn more.

helicon_partial_cleaned <- subset(helicon, select = -bract_count)

Now that the data is partially cleaned, we should determine how many values are still missing. From the analysis of the columns we know it should be 3 values, but let us confirm using the is.na() function again.

sum(is.na(helicon_partial_cleaned))
## [1] 3

The following command will remove the three rows that have missing values.

helicon_cleaned <- helicon_partial_cleaned[complete.cases(helicon_partial_cleaned),]

If the row names of your dataset are not unique and the positional information is not important, it is standard practice to reset the row names so they are in numerical order. If this is not done, you may see skips in the numbering at positions where rows were removed during cleaning. We have two options for resetting the names. The first option is to set the row.names value to NULL, as seen below.

row.names(helicon_cleaned) <- NULL

Alternatively, you can save the cleaned data without row names and then reload it into R. This approach is not recommended for large datasets. See the next section for information about saving data.
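A minimal sketch of that second option, assuming your working directory is writable; the file name helicon_tmp.csv is only a placeholder:

# Write without row names, then read back; the row names reset to 1, 2, 3, ...
write.csv(helicon_cleaned, file = "helicon_tmp.csv", row.names = FALSE)
helicon_cleaned <- read.csv("helicon_tmp.csv", header = TRUE)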

We will view the remaining cleaned dataset. When viewing, be sure that you include the correct (new) dataset name.

knitr::kable(helicon_cleaned, format = "html")
Variety Length
bihai 47.12
bihai 46.75
bihai 46.81
bihai 47.12
bihai 46.67
bihai 47.43
bihai 46.44
bihai 46.64
bihai 48.07
bihai 48.34
bihai 48.15
bihai 50.26
bihai 50.12
bihai 46.34
bihai 46.94
bihai 48.36
red 41.90
red 42.01
red 41.93
red 43.09
red 41.47
red 41.69
red 39.78
red 40.57
red 39.63
red 42.18
red 40.66
red 37.87
red 39.16
red 37.40
red 38.20
red 38.07
red 38.10
red 37.97
red 38.79
red 38.23
red 38.87
red 37.78
red 38.01
yellow 36.78
yellow 37.02
yellow 36.52
yellow 36.11
yellow 36.03
yellow 35.45
yellow 38.13
yellow 37.10
yellow 35.17
yellow 36.82
yellow 36.66
yellow 35.68
yellow 36.03
yellow 34.57
yellow 34.63

2.4 Saving your Cleaned Data

Once you are satisfied with the cleaned dataset, you should save it for later access. As mentioned above, we have several methods that can assist with this process, and we can save in several different formats; we will stick with the comma-delimited (CSV) format. If the row names of the data frame are not unique identifiers, as is the case for the helicon dataset, we typically do not save them, which allows them to be reset on future loading. Below we save the cleaned data into the Data folder. Make sure the Data folder exists as a subfolder of your working directory if you are running R from the console.

write.csv(helicon_cleaned, file = "Data/helicon_cleaned.csv", row.names = FALSE)

2.5 Manipulating Data

For readability, you might want to change a shortened name or abbreviation to the full version. This is done by the following commands:

First, I create a new table, since I am making a modification:

helicon_new <- helicon_cleaned

Initialize a new variable by copying the old values into it (the as.character() is not always necessary):

helicon_new$NewVariety <- as.character(helicon_new$Variety)
# Change names
helicon_new$NewVariety[helicon_new$Variety == "red"] <- "Caribaea_Red"
helicon_new$NewVariety[helicon_new$Variety == "yellow"] <- "Caribaea_yellow"

You can indicate a range of rows by using a colon (:). You have to include the comma (,) after the row numbers if you want to include all of the variables in their original order:

helicon_new[c(36:43),] 

In addition, you might want to create a new variable based on mathematical operations on old variable(s). You can use the sample code below to convert the lengths of the beaks from millimeters to inches; the conversion factor is 1/25.4. The other common mathematical operations are + (addition), - (subtraction; be sure that this is a hyphen, the character next to 0 on the standard keyboard, and not a special character), * (multiplication), ^ (exponentiation, so a^x is a raised to the power x), and exp() (the exponential function, so exp(x) is e^x, the antilog, where e is Euler’s constant).

helicon_new$length_inches <- helicon_new$Length/25.4
tail(helicon_new)           

You will see a new column in the dataset called length_inches:

Again, whenever you modify an existing variable and/or table, I strongly recommend that you create a new one, since (1) if there is a mistake, you won’t overwrite the original data, and (2) you can compare the new with the original. This means that you should never re-use variable names; your modified variable should always have a distinct name from any other variable in your dataset.