CHAPTER 9

Big Data, Cloud Computing, and Location Analytics: Concepts and Tools

LEARNING OBJECTIVES

■ Learn what Big Data is and how it is changing the world of analytics

■ Understand the motivation for and business drivers of Big Data analytics

■ Become familiar with the wide range of enabling technologies for Big Data analytics

■ Learn about Hadoop, MapReduce, and NoSQL as they relate to Big Data analytics

■ Compare and contrast the complementary uses of data warehousing and Big Data technologies

■ Become familiar with in-memory analytics and Spark applications

■ Become familiar with select Big Data platforms and services

■ Understand the need for and appreciate the capabilities of stream analytics

■ Learn about the applications of stream analytics

■ Describe the current and future use of cloud computing in business analytics

■ Describe how geospatial and location-based analytics are assisting organizations

Big Data, which means many things to many people, is not a new technological fad. It has become a business priority that has the potential to profoundly change the competitive landscape in today's globally integrated economy. In addition to providing innovative solutions to enduring business challenges, Big Data and analytics instigate new ways to transform processes, organizations, entire industries, and even society altogether. Yet extensive media coverage makes it hard to distinguish hype from reality. This chapter aims to provide comprehensive coverage of Big Data, its enabling technologies, and related analytics concepts to help understand the capabilities and limitations of this emerging technology. The chapter starts with a definition and related concepts of Big Data, followed by the technical details of the enabling technologies, including Hadoop, MapReduce, and NoSQL. We provide a comparative analysis between data warehousing and Big Data analytics. The last part of the chapter is dedicated to stream analytics, which is one of the most promising value propositions of Big Data analytics. This chapter contains the following sections:

9.1 Opening Vignette: Analyzing Customer Churn in a Telecom Company Using Big Data Methods
9.2 Definition of Big Data
9.3 Fundamentals of Big Data Analytics
9.4 Big Data Technologies
9.5 Big Data and Data Warehousing
9.6 In-Memory Analytics and Apache Spark™
9.7 Big Data and Stream Analytics
9.8 Big Data Vendors and Platforms
9.9 Cloud Computing and Analytics
9.10 Location-Based Analytics for Organizations

9.1 OPENING VIGNETTE: Analyzing Customer Churn in a Telecom Company Using Big Data Methods

BACKGROUND

A telecom company (named Access Telecom [AT] for privacy reasons) wanted to stem the tide of customers churning from its telecom services. Customer churn in the telecommunications industry is common, but Access Telecom was losing customers at an alarming rate. Several causes of, and potential solutions to, this phenomenon were proposed. The company's management realized that many cancellations involved communications between the customer service department and the customers. To this end, a task force comprising members from the customer relations office and the information technology (IT) department was assembled to explore the problem further. Its task was to explore how customer churn could be reduced based on an analysis of the customers' communication patterns (Asamoah, Sharda, Zadeh, & Kalgotra, 2016).

BIG DATA HURDLES

Whenever a customer had a problem with an issue such as a bill, a plan, or call quality, the customer could contact the company in multiple ways: through the call center, the company Web site ("contact us" links), or a walk-in at a physical service center. Customers could also cancel an account through any of these interactions. The company wanted to see whether analyzing these customer interactions could yield insights about the questions customers asked or the contact channel(s) they used before canceling their accounts. The data generated by these interactions were in both text and audio formats, so AT would have to combine all the data in one location. The company explored the use of traditional data management platforms but soon found that they were not versatile enough to handle advanced analysis of multiple formats of data coming from multiple sources (Thusoo, Shao, & Anthony, 2010).

There were two major challenges in analyzing these data: the variety arising from multiple data sources, and the sheer volume of the data.

1. Data from multiple sources: Customers could connect with the company by accessing their accounts on the company's Web site, allowing AT to generate Web log information on customer activity. Web log tracking allowed the company to identify if and when a customer reviewed his/her current plan, submitted a complaint, or checked the bill online. At the customer service center, customers could also lodge a service complaint, request a plan change, or cancel the service. These activities were logged into the company's transaction system and then into the enterprise data warehouse. Last, a customer could call the customer service center on the phone and transact business just as he/she would in person at a customer service center. Such transactions could involve a balance inquiry or an initiation of plan cancellation. Call logs were available in one system with a record of the reasons customers were calling. For meaningful analysis to be performed, the individual data sets had to be converted into a similar structured format (a sketch of this conversion follows the list below).

2. Data volume: The second challenge was the sheer quantity of data from the three sources that had to be extracted, cleaned, restructured, and analyzed. Although previous data analytics projects had mostly utilized small sample sets of data, AT decided to leverage the variety of data sources as well as the large volume of data recorded to generate as many insights as possible.
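To make the conversion described in the first hurdle concrete, here is a minimal Python sketch of mapping records from the three channels into one common structure. All raw field names (user_id, cust_no, caller_id, and so on) are hypothetical stand-ins for illustration, not AT's actual schema.

```python
from datetime import datetime

# Hypothetical raw-record layouts for the three channels; each converter
# emits the common structure used throughout this case:
# {customer_id, channel, timestamp, action}.

def from_weblog(rec):
    return {"customer_id": rec["user_id"], "channel": "Online",
            "timestamp": datetime.fromisoformat(rec["ts"]),
            "action": rec["page_action"]}

def from_store(rec):
    return {"customer_id": rec["cust_no"], "channel": "Store",
            "timestamp": rec["visit_time"], "action": rec["request_type"]}

def from_callcenter(rec):
    return {"customer_id": rec["caller_id"], "channel": "Callcenter",
            "timestamp": rec["call_start"], "action": rec["call_reason"]}
```

Once every interaction is reduced to the same four fields, the three sources can be combined into a single table for analysis.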

An analytical approach that could make use of all these channels and sources of data, however huge, had the potential to generate rich, in-depth insights to help curb the churn.

SOLUTION

Teradata Vantage's unified Big Data architecture (previously offered as Teradata Aster) was utilized to manage and analyze the large multistructured data set. We will introduce Teradata Vantage in Section 9.8. A schematic of how the data sources were combined is shown in Figure 9.1. Based on each data source, three tables were created, each containing the following variables: customer ID, channel of communication, date/time stamp, and action taken. Prior to final cancellation of a service, the action-taken variable could be one or more of these 11 options (simplified for this case): present a bill dispute, request a plan upgrade, request a plan downgrade, perform a profile update, view account summary, access customer support, view bill, review contract, access the store locator function on the Web site, access the frequently asked questions section on the Web site, or browse devices. The analysis focused on finding the most common paths resulting in a final service cancellation. The data were sessionized to group a string of events involving a particular customer within a defined time period (5 days across all channels of communication) into one session. Finally, Vantage's nPath time sequence function (operationalized in an SQL-MapReduce framework) was used to analyze common trends that led to a cancellation.

FIGURE 9.1 Multiple Data Sources Integrated into Teradata Vantage. (The figure shows online, store, and call center data residing on Teradata, Aster, and HDFS/HCatalog, brought together through the SQL-H and load_from_teradata connectors.) Source: Teradata Corp.
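The sessionizing step described above can be approximated in plain Python. The sketch below is not Vantage's built-in sessionize or nPath machinery, only an illustration of the grouping logic, using the 5-day window from the case and the hypothetical record layout introduced earlier.

```python
from collections import defaultdict
from datetime import timedelta

def sessionize(events, window_days=5):
    """Group each customer's events into sessions spanning at most `window_days`."""
    by_customer = defaultdict(list)
    for e in events:
        by_customer[e["customer_id"]].append(e)
    sessions = defaultdict(list)        # customer_id -> list of sessions
    for cust, evts in by_customer.items():
        evts.sort(key=lambda e: e["timestamp"])
        current = [evts[0]]
        for e in evts[1:]:
            # close the session when an event falls outside the 5-day window
            # measured from the session's first event
            if e["timestamp"] - current[0]["timestamp"] > timedelta(days=window_days):
                sessions[cust].append(current)
                current = []
            current.append(e)
        sessions[cust].append(current)
    return sessions
```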

RESULTS

The initial results identified several routes that could lead to a request for service cancellation; in fact, the company found thousands of routes a customer might take to cancel service. A follow-up analysis was performed to identify the most frequent routes to cancellation requests, which were termed the Golden Path. The top 20 most frequently occurring paths that led to a cancellation were identified over both the short and the long term. A sample is shown in Figure 9.2.

This analysis helped the company identify customers likely to cancel their service before they did so, allowing it to offer incentives or at least escalate problem resolution to a level at which the customer's path to cancellation did not materialize.
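As a rough, hypothetical illustration of that follow-up analysis, the sketch below counts the most frequent session paths ending in a cancellation, mimicking in plain Python the kind of result the nPath function reports; the "Cancel Service" label is assumed for illustration.

```python
from collections import Counter

def top_cancellation_paths(sessions, k=20):
    """Return the k most common channel:action paths that end in a cancellation."""
    paths = Counter()
    for customer_sessions in sessions.values():
        for session in customer_sessions:
            steps = tuple(f'{e["channel"]}:{e["action"]}' for e in session)
            # keep only paths whose final step is a cancellation
            if steps and steps[-1].endswith("Cancel Service"):
                paths[steps] += 1
    return paths.most_common(k)
```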

QUESTIONS FOR THE OPENING VIGNETTE

1. What problem did customer service cancellation pose to AT's business survival?
2. Identify and explain the technical hurdles presented by the nature and characteristics of AT's data.
3. What is sessionizing? Why was it necessary for AT to sessionize its data?

FIGURE 9.2 Top 20 Paths Visualization. (The figure traces paths through Callcenter, Store, and Online events such as bill disputes, service complaints, and new accounts, each ending in a Cancel Service step.) Source: Teradata Corp.


4. Research other studies where customer churn models have been employed. What
types of variables were used in those studies? How is this vignette different?

5. Besides Teradata Vantage, identify other popular Big Data analytics platforms that
could handle the analysis described in the preceding case. (Hint: see Section 9.8.)

WHAT CAN WE LEARN FROM THIS VIGNETTE?

Not all business problems merit the use of a Big Data analytics platform, but this situation presents a business case that warranted one. The main challenge revolved around the characteristics of the data under consideration: the three different types of customer interaction data sets, generated in systems with widely differing formats and fields, presented a challenge for analysis, and the volume was large as well. This made it imperative to use a platform with technologies that permit analysis of a large volume of data that comes in a variety of formats.

Recently, Teradata stopped marketing Aster as a separate product and has merged all of the Aster capabilities into its new offering called Teradata Vantage. Although that change somewhat impacts how the application would be developed today, it is still a terrific example of how a variety of data can be brought together to make business decisions.

It is also worthwhile to note that AT aligned the questions asked of the data with the organization's business strategy. The questions also informed the type of analysis that was performed. It is important to understand that for any application of a Big Data architecture, the organization's business strategy and the generation of relevant questions are key to identifying the type of analysis to perform.

Sources: D. Asamoah, R. Sharda, A. Zadeh, & P. Kalgotra. (2016). “Preparing a Big Data Analytics Professional:
A Pedagogic Experience.” In DSI 2016 Conference, Austin, TX. A. Thusoo, Z. Shao, & S. Anthony. (2010). “Data
Warehousing and Analytics Infrastructure at Facebook.” In Proceedings of the 2010 ACM SIGMOD International
Conference on Management of Data (p. 1013). doi: 10.1145/1807167.1807278.

9.2 DEFINITION OF BIG DATA

Using data to understand customers/clients and business operations to sustain (and foster) growth and profitability is an increasingly challenging task for today's enterprises. As more and more data becomes available in various forms and fashions, timely processing of the data with traditional means becomes impractical. This phenomenon is now called Big Data, and it is receiving substantial press coverage and drawing increasing interest from both business users and IT professionals. As a result, Big Data is becoming an overhyped and overused marketing buzzword, leading some industry experts to argue for dropping the phrase altogether.

Big Data means different things to people with different backgrounds and interests. Traditionally, the term Big Data has been used to describe the massive volumes of data analyzed by huge organizations like Google or research science projects at NASA. But for most businesses, it's a relative term: "big" depends on an organization's size. The point is more about finding new value within and outside conventional data sources. Pushing the boundaries of data analytics uncovers new insights and opportunities, and "big" depends on where you start and how you proceed. Consider the popular description of Big Data: Big Data exceeds the reach of commonly used hardware environments and/or the capabilities of software tools to capture, manage, and process it within a tolerable time span for its user population. Big Data has become a popular term to describe the exponential growth, availability, and use of information, both structured and unstructured. Much has been written on the Big Data trend and how it can serve as the basis for innovation, differentiation, and growth. Because traditional systems struggle to manage the large volume of data coming from multiple sources, sometimes at rapid speed, new technologies have been developed to overcome these challenges, and use of the term Big Data is usually associated with such technologies. Because a prime use of storing such data is generating insights through analytics, the term is sometimes expanded to Big Data analytics. But the term is becoming content free in that it can mean different things to different people. Because our goal is to introduce you to large data sets and their potential for generating insights, we will use the original term in this chapter.

Where does Big Data come from? A simple answer is "everywhere." Sources that were once ignored because of technical limitations are now treated as gold mines. Big Data may come from Web logs, radio-frequency identification (RFID), global positioning systems (GPS), sensor networks, social networks, Internet-based text documents, Internet search indexes, call detail records, astronomy, atmospheric science, biology, genomics, nuclear physics, biochemical experiments, medical records, scientific research, military surveillance, photography archives, video archives, and large-scale e-commerce practices.

Big Data is not new. What is new is that the definition and the structure of Big Data constantly change. Companies have been storing and analyzing large volumes of data since the advent of data warehouses in the early 1990s. Whereas terabytes used to be synonymous with Big Data warehouses, now it's exabytes, and the rate of growth in data volume continues to escalate as organizations seek to store and analyze greater levels of transaction detail, as well as Web- and machine-generated data, to gain a better understanding of customer behavior and business drivers.

Many (academics and industry analysts/leaders alike) think that “Big Data” is a
misnomer. What it says and what it means are not exactly the same. That is, Big Data is
not just “big.” The sheer volume of the data is only one of many characteristics that are
often associated with Big Data, including variety, velocity, veracity, variability, and value
proposition, among others.

The “V”s That Define Big Data

Big Data is typically defined by three “V”s: volume, variety, velocity. In addition to these
three, we see some of the leading Big Data solution providers adding other “V”s, such as
veracity (IBM), variability (SAS), and value proposition.

VOLUME Volume is obviously the most common trait of Big Data. Many factors contributed
to the exponential increase in data volume, such as transaction-based data stored through
the years, text data constantly streaming in from social media, increasing amounts of sensor
data being collected, automatically generated RFID and GPS data, and so on. In the past,
excessive data volume created storage issues, both technical and financial. But with today’s
advanced technologies coupled with decreasing storage costs, these issues are no longer
significant; instead, other issues have emerged, including how to determine relevance amid
the large volumes of data and how to create value from data that is deemed to be relevant.

As mentioned before, big is a relative term. It changes over time and is perceived differently by different organizations. With the staggering increase in data volume, even naming the next Big Data echelon has been a challenge. The largest commonly cited mass of data, which used to be the petabyte (PB), has given way to the zettabyte (ZB), which is a trillion gigabytes (GB) or a billion terabytes (TB). Technology Insights 9.1 provides an overview of the size and naming of Big Data volumes.

From a short historical perspective, in 2009 the world had about 0.8 ZB of data; in 2010, it exceeded the 1 ZB mark; at the end of 2011, the number was 1.8 ZB. It is expected to be 44 ZB in 2020 (Adshead, 2014). With the growth of sensors and the Internet of Things (IoT—to be introduced in the next chapter), these forecasts could all be wrong. Though these numbers are astonishing in size, so are the challenges and opportunities that come with them.

VARIETY Data today come in all types of formats, ranging from traditional databases, to hierarchical data stores created by end users and OLAP systems, to text documents, e-mail, XML, meter-collected and sensor-captured data, to video, audio, and stock ticker data. By some estimates, 80 to 85% of all organizations' data are in some sort of unstructured or semi-structured format (a format that is not suitable for traditional database schemas). But there is no denying the value of such data, and hence it must be included in analyses to support decision making.

VELOCITY According to Gartner, velocity means both how fast data is being produced and how fast the data must be processed (i.e., captured, stored, and analyzed) to meet the need or demand. RFID tags, automated sensors, GPS devices, and smart meters are driving an increasing need to deal with torrents of data in near real time. Velocity is perhaps the most overlooked characteristic of Big Data. Reacting quickly enough to deal with velocity is a challenge to most organizations. For time-sensitive environments, the opportunity cost clock of the data starts ticking the moment the data is created. As time passes, the value proposition of the data degrades and eventually becomes worthless. Whether the subject matter is the health of a patient, the well-being of a traffic system, or the health of an investment portfolio, accessing the data and reacting faster to the circumstances will always create more advantageous outcomes.

TECHNOLOGY INSIGHTS 9.1 The Data Size Is Getting Big, Bigger, and Bigger

The measures of data size are having a hard time keeping up with new names. We all know the kilobyte (KB, which is 1,000 bytes), megabyte (MB, 1,000,000 bytes), gigabyte (GB, 1,000,000,000 bytes), and terabyte (TB, 1,000,000,000,000 bytes). Beyond that, the names given to data sizes are relatively new to most of us. The following table shows what comes after the terabyte.

Name         Symbol   Value
Kilobyte     kB       10^3
Megabyte     MB       10^6
Gigabyte     GB       10^9
Terabyte     TB       10^12
Petabyte     PB       10^15
Exabyte      EB       10^18
Zettabyte    ZB       10^21
Yottabyte    YB       10^24
Brontobyte*  BB       10^27
Gegobyte*    GeB      10^30

*Not an official SI (International System of Units) name/symbol, yet.

Consider that an exabyte of data is created on the Internet each day, which equates to 250 million DVDs' worth of information. And the idea of even larger amounts of data, the zettabyte, isn't too far off when it comes to the amount of information traversing the Web in any one year. In fact, industry experts estimated that we would see 1.3 zettabytes of traffic annually over the Internet by 2016, and that it could jump to 2.3 zettabytes by 2020. By 2020, Internet traffic is expected to reach 300 GB per capita per year. When referring to yottabytes, some Big Data scientists wonder how much data the NSA or FBI have on people altogether. Put in terms of DVDs, a yottabyte would require 250 trillion of them. A brontobyte, which is not an official SI prefix but is apparently recognized by some people in the measurement community, is a 1 followed by 27 zeros. A magnitude of this size can be used to describe the amount of sensor data that we will get from the Internet in the next decade, if not sooner.

A gegobyte is 10 to the power of 30. With respect to where Big Data comes from, consider the following:

• The CERN Large Hadron Collider generates 1 petabyte per second.
• Sensors from a Boeing jet engine create 20 terabytes of data every hour.
• Every day, 600 terabytes of new data are ingested into Facebook databases.
• On YouTube, 300 hours of video are uploaded per minute, translating to 1 terabyte every minute.
• The proposed Square Kilometer Array telescope (the world's biggest proposed telescope) will generate an exabyte of data per day.

Sources: S. Higginbotham. (2012). "As Data Gets Bigger, What Comes after a Yottabyte?" gigaom.com/2012/10/30/as-data-gets-bigger-what-comes-after-a-yottabyte (accessed October 2018). Cisco. (2016). "The Zettabyte Era: Trends and Analysis." cisco.com/c/en/us/solutions/collateral/service-provider/visual-networking-index-vni/vni-hyperconnectivity-wp.pdf (accessed October 2018).
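The decimal prefixes in the table above can be sanity-checked with a small, generic helper; the function below is an illustrative sketch, not part of any particular library.

```python
UNITS = ["B", "kB", "MB", "GB", "TB", "PB", "EB", "ZB", "YB"]

def humanize(num_bytes):
    """Express a byte count in the largest decimal (SI) unit with value >= 1."""
    value, unit = float(num_bytes), UNITS[0]
    for u in UNITS[1:]:
        if value < 1000:
            break
        value, unit = value / 1000, u
    return f"{value:.1f} {unit}"

print(humanize(44 * 10**21))   # the 44 ZB forecast for 2020 -> "44.0 ZB"
```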

In the Big Data storm that we are currently witnessing, almost everyone is fixated on at-rest analytics, using optimized software and hardware systems to mine large quantities of variant data sources. Although this is critically important and highly valuable, there is another class of analytics, driven by the velocity of Big Data, called "data stream analytics" or "in-motion analytics," which is evolving fast. If done correctly, data stream analytics can be as valuable as, and in some business environments more valuable than, at-rest analytics. Later in this chapter we will cover this topic in more detail.
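As a toy sketch of the in-motion idea, assume events arrive as (timestamp, value) pairs: rather than storing everything for later at-rest analysis, a rolling aggregate is updated as each event streams in. The class name and window size below are illustrative only.

```python
from collections import deque
from datetime import datetime, timedelta

class RollingSum:
    """Keep a running sum over a sliding time window."""
    def __init__(self, window=timedelta(seconds=60)):
        self.window = window
        self.events = deque()   # (timestamp, value) pairs inside the window
        self.total = 0.0

    def add(self, ts, value):
        self.events.append((ts, value))
        self.total += value
        # evict events that have fallen out of the window
        while self.events and ts - self.events[0][0] > self.window:
            _, old = self.events.popleft()
            self.total -= old
        return self.total

rs = RollingSum()
print(rs.add(datetime.now(), 3.5))   # -> 3.5
```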

VERACITY Veracity is a term coined by IBM that is used as the fourth "V" to describe Big Data. It refers to conformity to facts: accuracy, quality, truthfulness, or trustworthiness of the data. Tools and techniques are often used to handle Big Data's veracity by transforming the data into quality and trustworthy insights.
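As a small illustration of a veracity check, the sketch below (with hypothetical field names) quarantines records that fail basic completeness and plausibility tests before they reach the analysis stage.

```python
REQUIRED = ("customer_id", "timestamp", "action")

def triage(rows):
    """Split rows into clean and suspect lists based on simple veracity tests."""
    clean, suspect = [], []
    for row in rows:
        complete = all(row.get(f) not in (None, "") for f in REQUIRED)
        plausible = row.get("duration_sec", 0) >= 0   # e.g., no negative durations
        (clean if complete and plausible else suspect).append(row)
    return clean, suspect
```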

VARIABILITY In addition to the increasing velocities and varieties of data, data flows can be highly inconsistent, with periodic peaks. Is something big trending in social media? Perhaps there is a high-profile IPO looming. Maybe swimming with pigs in the Bahamas is suddenly the must-do vacation activity. Daily, seasonal, and event-triggered peak data loads can be highly variable and thus challenging to manage, especially with social media involved.

VALUE PROPOSITION The excitement around Big Data is its value proposition. A preconceived notion about "big" data is that it contains (or has a greater potential to contain) more patterns and interesting anomalies than "small" data. Thus, by analyzing large and feature-rich data sets, organizations can gain business value that they could not gain otherwise. Whereas users can detect patterns in small data sets using simple statistical and machine-learning methods or ad hoc query and reporting tools, Big Data means "big" analytics. Big analytics means greater insight and better decisions, something that every organization needs.

Because the exact definition of Big Data (or its successor terms) is still a matter of ongoing discussion in academic and industrial circles, it is likely that more characteristics (perhaps more "V"s) will be added to this list. Regardless of what happens, the importance and value proposition of Big Data are here to stay. Figure 9.3 shows a conceptual architecture in which Big Data (at the left side of the figure) is converted to business insight through a combination of advanced analytics and delivered to a variety of users/roles for faster/better decision making.

Another term being added to the list of Big Data buzzwords is alternative data. Application Case 9.1 shows examples of multiple types of data in a number of different scenarios.

FIGURE 9.3 A High-Level Conceptual Architecture for Big Data Solutions. (The figure shows data sources such as ERP, SCM, CRM, images, audio and video, machine logs, text, and Web and social data being moved into a data platform/data lake, managed in an integrated data warehouse and integrated discovery platform, and accessed through analytic tools and applications by users ranging from marketing executives to frontline workers, analysts, data scientists, and engineers.) Source: Teradata Company.

Application Case 9.1 Alternative Data for Market Analysis or Forecasts

Getting a good forecast and understanding of the situation is crucial in any scenario, but it is especially important to players in the investment industry. Being able to get an early indication of how a particular retailer's sales are doing can give an investor a leg up on whether to buy or sell that retailer's stock even before the earnings reports are released. The problem of forecasting economic activity or microclimates based on a variety of data beyond the usual retail data is a very recent phenomenon and has led to another buzzword: "alternative data." A major component of this alternative data category is satellite imagery, but it also includes other data such as social media, government filings, job postings, traffic patterns, changes in parking lots or open spaces detected by satellite images, mobile phone usage patterns in any given location at any given time, search patterns on search engines, and so on. Facebook and other companies have invested in satellites to try to image the whole globe every day so that daily changes can be tracked at any location and the information can be used for forecasting. Many interesting examples of more reliable and advanced forecasts have been reported. Indeed, this activity is being led by start-up companies. Tartar (2018) cited several examples, and we mentioned some in Chapter 1. Here are some of the examples identified by that report and by many other proponents of alternative data:

• RS Metrics monitored parking lots across the United States for various hedge funds. In 2015, based on an analysis of the parking lots, RS Metrics predicted a strong second quarter for JC Penney, and its clients (mostly hedge funds) profited from this advance insight. A similar story has been reported for Wal-Mart, using car counts in its parking lots to forecast sales.

• Spaceknow keeps track of changes in factory surroundings for over 6,000 Chinese factory sites. Using this data, the company has been able to provide a better idea of China's industrial economic activity than what the Chinese government has been reporting.

• Telluslabs, Inc. compiles data from NASA and European satellites to build prediction models for various crops such as corn, rice, soybean, and wheat. Besides the images from the satellites, they incorporate measurements of thermal infrared bands, which help measure radiating heat to predict the health of the crops.

• DigitalGlobe is able to analyze the size of a forest with more …
