Webinar: Delivering on the Promise of Business Value from Data Lakes

The Bloor Group


In this episode of Hot Technologies, industry thought leader Wayne Eckerson explains how a new wave of technology is enabling visual analysis and discovery of nearly any type of data.

He is joined by Steve Wooledge of Arcadia Data, who showcases his company's visual analytics platform providing native BI for data lakes.

Watch this webinar to learn about:

  • Key challenges of getting value from your data lake
  • How native business intelligence can drive more insights 
  • Customer success stories

Transcript

[00:00:01.899]
Ladies and gentlemen, hello and welcome back once again to Hot Technologies of 2018. My name is Eric Kavanagh and I will be your moderator for today's event: Delivering on the Promise of Business Value from Data Lakes. That's what everybody wants from the data lake, and I can tell you right now we have a great lineup for you today: yours truly, my good buddy Wayne Eckerson of the Eckerson Group, who has dialed in, and my other good friend Steve Wooledge of Arcadia Data. We'll do a couple of presentations and hopefully a demo as well.

[00:00:36.399]
But I do want to share some other quick thoughts with you. This webcast is part of a whole program, and that includes an assessment which is designed, frankly, to help users like yourselves understand where you are in the data lake journey, as they say in the business, and to figure out what your next step should be, based upon feedback from yourself and from your peers. Many of you will have seen the survey, or assessment; it pops up when you register, and this is what you see when you get there: how much value does your data lake provide to business users?

[00:01:09.099]
What we've done here, and this is something that Wayne and his team came up with, is called Rate My Data. It's a very cool technology, a platform for helping companies understand how they size up against other organizations and determine the next best step forward. Obviously, any investment as serious as a data lake requires a lot of thought, a lot of time, a lot of resources and, of course, some money thrown in there as well, and you want to make sure you're making the right decisions. So Wayne and his team took this concept and developed Rate My Data, which is an actual application, a web-based platform for assessments. Taking the assessment takes about five minutes, and you get a personalized report.

[00:01:52.000]
The report can take a number of different directions in terms of what you get. This is one of the things you get after you take the assessment: this was a medium score, this is a high score, and you can see how you compare to other companies. The idea is to help you figure out which way to move, which direction to take for your organization to optimize the value of what you've got.

[00:02:16.599]
I'd also like to push out a poll, if I could, very quickly. Let me see here; I'm on the road myself today. I'll open this poll, and you can see the question: how do you, or how do you plan to, give users access to your data lakes? You have five different options there, A, B, C, D and E, and I'll give this a couple of minutes. Okay, folks are starting to answer. So A is development tools, B is direct SQL access, C is via traditional BI tools, D is Hadoop-native, and E is other; and for other, just go ahead and put something in the chat if you have something else going on.

[00:02:51.800]
Once again, this is all part of our desire to understand what's going on out there in the marketplace. We are researchers and analysts, Wayne and myself, and of course companies like Arcadia are always very curious to understand what's actually happening out there in the real world. What are you folks doing? What are you seeing? We always want to understand your thoughts and your perspective on these things because, like I said, this is a very challenging space, you want to make sure you do things right, and that's why we have this whole platform for you.

[00:03:19.300]
I'll give this one more second. Maybe, Wayne Eckerson, I'll throw a quick question over to you, if you want to talk for a second about Rate My Data or about this particular poll. You guys spent some time working with Arcadia to put all this together. Any thoughts on what you hope to see from the survey as you read what people put in their personal assessments?

[00:03:41.000]
Right. So data lakes have traditionally been the domain of data scientists, and we wanted to explore how many regular Joes are actually using the data lake and getting value from it. It's about 20 questions, but it takes about four minutes to complete because of the way we designed them, with a rating scale that doesn't change from question to question. It spans the six categories that we looked at; each category has about two questions, and then there are a few quick profile questions.

[00:04:19.500]
The cool thing about this report, and we've had over a hundred people take the assessment so far, is that you can go in and filter it. The filter button, which you can see at the right-hand corner, is where you can benchmark yourself against a more targeted niche group, based on company size, region and industry, among other things. So it's a pretty useful tool to give you a quick snapshot of where you stand in terms of data lake usage by regular Joes.

[00:04:52.600]
Okay, good stuff. Let me go ahead and close this poll; we got a good number of responses, and I can share the results: 31% say traditional BI tools, 12% direct SQL access, 8% Hadoop-native, 4% development tools, a few percent other, and, across the entire breakout, folks, no answer is at 42% so far. But thank you very much for taking that, folks, well done.

[00:05:18.199]
And now let me hop back over here. I'm going to push this next slide, and I'm actually going to give the keys to the castle to Mr. Eckerson. Wayne, you can share your screen or use the slides in there.

[00:05:40.899]
I'm on a Mac, so it takes a little while. Are you seeing one screen or two?

[00:05:45.300]
Just one; looks good.

[00:05:47.300]
Okay. The topic of the day: data lakes, and whether you can get business value from them. I thought I'd start from the beginning, comparing data warehouses to data lakes, because in many ways data lakes were a response to the data warehouse.

[00:06:11.800]
This is the traditional data warehouse architecture. The benefit of it is one place for all your data from multiple source systems, designed for queries rather than transactions, and designed in a way that simplifies user access and speeds queries across all of your data: a single version of the truth, with common metrics and standard definitions. What we found is that it's very much ideal for supporting what you'd call reports and dashboards.

[00:06:56.300]
That was the promise of the data warehouse, and it still is a relevant promise today, but we hit a lot of speed bumps along the way, and data lakes in many ways are an answer to the data warehouse's speed bumps.

[00:07:15.199]
It takes a long time to model and load the data, so a warehouse takes a long time to build and a long time to change. It takes an army of people to maintain, so it can be costly; the infrastructure, built on relational databases, tends to be costly as well, and so are the skills. And it's not really designed for multi-structured data.

[00:07:43.899]
So what we have found is that data warehouses are really good for answering known questions, the things IT traditionally gathered requirements for: they developed reports and dashboards, pointed them at the warehouse, and added some drill-down for root cause analysis around them. But it's really less good for answering new questions with new types of data. In many ways, the problems with the data warehouse came from us asking it to do more than it was really designed for.

[00:08:17.800]
So around 2010, Hadoop, on which most data lakes are built, came onto the scene hard, and in Hadoop circles people advocated wiping the data warehouse away and entirely replacing it with, in this case, data lakes, because they solved a lot of the problems the warehouse suffered from. They are highly scalable on a low-cost, scale-out, distributed architecture. They take any kind of data, thanks to schema on read: basically you're just dumping data into a file system, you don't have to model it first, which gives users, especially those data-hungry power users and data scientists who were sick of having to wait for the IT department to model the data, instant access to it. They're also built on open source, so the cost of software licenses is an order of magnitude less than a traditional data warehouse. And once you put data in there, you never have to move it, because you can bring different compute engines, SQL engines, processing engines and the like, to the data, so you never move the data out of the cluster into something else.
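
To make the schema-on-read idea concrete, here is a minimal PySpark sketch; the file path, bucket and column names are hypothetical, an illustration rather than anything shown in the webinar.

```python
# Minimal schema-on-read sketch (hypothetical paths and columns): the raw
# events were dumped into the lake as-is; structure is applied at read time.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("schema-on-read-demo").getOrCreate()

# No upfront modeling: Spark infers a schema from the raw JSON when it reads.
events = spark.read.json("hdfs:///lake/raw/events/")  # or "s3a://bucket/raw/events/"

# Query immediately, including nested fields, with no ETL or modeling cycle.
events.select("user.id", "device.type", "ts") \
      .where(events.ts >= "2018-01-01") \
      .groupBy("device.type").count() \
      .show()
```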

[00:09:48.500]
That was the promise of data lakes. What we've found is that there are some liabilities. One: it's still a relatively new technology, evolving quite fast, very fast as a matter of fact, but it is still working out issues. A lot of the software is built on open source from the Apache Foundation, with lots of different projects starting, stopping and overlapping; it's hard to keep track of them all, which is why we need these distributions from companies like Cloudera and MapR and Hortonworks.

[00:10:25.299]
It also turns out to be fairly complex to manage, especially the infrastructure and the hardware, which end up costing a lot more money than people think, and the skills to do it are not cheap either. From a workload processing perspective, Hadoop can do real fast full table scans, but it's less good for the complex multi-table joins that BI tools tend to issue.

[00:10:52.299]
And this means it's really good for data scientists and power users who want instant access to raw data; it's going to be an open data dump. It's also good for offloading ETL workloads and for archiving large files of detailed data so you don't have to upgrade the warehouse.

[00:11:21.399]
So what you're seeing is that the data lake was in some ways a response to the deficiencies of the data warehouse, yet the lake has its own deficiencies that, in fact, the warehouse is well suited to address.

[00:11:34.100]
If you look at the underlying technologies behind the data warehouse and the data lake, specifically the relational database and the distributed file system called Hadoop, in 2010 the attributes of each were almost polar opposites. You can go down the list here and see that on every single characteristic they are completely different: one is interactive, the other batch; one offers SQL, the other is Java-based; one is schema on write, the other schema on read; and so on down the list.

[00:12:08.899]
But these two technologies are kind of playing both frenemy and competitor in the ecosystem, and the result is that their capabilities are starting to converge: relational databases are starting to take on a lot of the capabilities of Hadoop, and vice versa, and both of them are going to the cloud.

[00:12:31.399]
So I've been trying to help companies figure out where the dividing line is between these two worlds, and it's a little bit difficult, since these are moving targets. But what we're seeing is that the data warehouse, the relational database, is great for supporting business people, the "regular Joes" as Eric was saying, and for supporting the specific types of workloads that require complex multi-table joins and large numbers of concurrent users. It's really good for supporting existing reports and dashboards and doing analysis on those things.

[00:13:06.399]
And the data lakes are really good for data scientists and power users who want instant access to the raw data, or to slightly scrubbed, clean data. They're really good for big table scans, large batch jobs, ETL offload, data offload, and data science sandboxes.

[00:13:27.600]
So when you put these two together, you realize: why should we have one versus the other? What we should be doing is figuring out how to unify these into a coherent ecosystem or architecture. What I've figured is that there are four options, and maybe more, but this is what I've come up with so far. The first is that you keep two distinct worlds or environments, a data lake and a warehouse sitting next to it, integrated: the warehouse running on a relational database and the data lake on Hadoop. That's how it exists in most companies today.

[00:14:17.399]
But there are other options as well. The second is to try to rebuild the data warehouse, though I would call these more data marts than data warehouses, inside the lake: people use Hadoop technologies, Impala for instance, to build a set of tables, often a dimensional schema of sorts, and then run queries against those tables in the data lake.

[00:14:44.600]
A third option would be to use a BI tool to sort of recreate a dimensional view of the data in the lake, and in this case it's a virtual view: the tool sits outside the lake, queries data inside the lake, and may pull data back into its own cache, where it can optimize it for faster performance.

[00:15:09.799]
Moving in that direction, the last option is where the analytical tool actually sits inside the data lake, resides there natively, and queries the data from there, and I believe Steve Wooledge is going to talk about that approach, since that's the one Arcadia takes.
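
As a hedged sketch of that second option (the database, table and path names here are invented), a dimensional-style table can be laid over files already sitting in the lake with a SQL-on-Hadoop engine such as Impala:

```python
# Hypothetical example: define and query a fact table over Parquet files in
# HDFS through Impala, so queries run where the data already sits.
from impala.dbapi import connect

cur = connect(host="impala-host", port=21050).cursor()  # assumed Impala daemon

# An external table over files that were already landed in the lake.
cur.execute("""
    CREATE EXTERNAL TABLE IF NOT EXISTS sales_fact (
        sale_id BIGINT, customer_id BIGINT, amount DOUBLE, sale_date STRING
    )
    STORED AS PARQUET
    LOCATION '/lake/refined/sales/'
""")

# A dimensional-style query against the lake, with no data movement.
cur.execute("""
    SELECT sale_date, SUM(amount) AS revenue
    FROM sales_fact
    GROUP BY sale_date
    ORDER BY sale_date
""")
for row in cur.fetchall():
    print(row)
```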

[00:15:29.500]
So where are we today? Well, the thinking two years ago was that you would put most of your data in Hadoop, or in S3 as the cloud started to emerge as a platform of choice for many companies. You would do a lot of your ETL work in Spark, and your data scientists might also use Spark libraries for doing machine learning. Then, once you refine the data, moving it through those zones on the left, you can push it into a relational database serving as your big data warehouse.

[00:16:19.600]
here are basically

[00:16:21.600]
you get that the steel bility support

[00:16:24.200]
for multi structured data and scheme on read

[00:16:26.399]
that I did a lake supports weather

[00:16:28.399]
in hot or not but

[00:16:31.000]
you still have the disadvantage of hopping and

[00:16:33.000]
duplicating data into a data warehouse

[00:16:36.899]
which anytime you duplicate large

[00:16:38.899]
signs of data and are

[00:16:41.000]
making a transition between orthogonal Technologies

[00:16:43.700]
you can run into problems and

[00:16:45.799]
expense the what is

[00:16:49.700]
a little bit different environment where

[00:16:52.500]
companies are using

[00:16:54.799]
big data analytics tools like

[00:16:56.899]
Arcadia to actually

[00:16:58.899]
query the day that it's been

[00:17:00.899]
transformed in the lake whether it's a

[00:17:02.899]
stupid or are you

[00:17:05.700]
see that Transformations happiness Park using

[00:17:08.599]
python oftentimes I

[00:17:10.799]
or commercial tools as well sometimes

[00:17:13.700]
Richland way down and should be at the

[00:17:15.799]
landing area as well

[00:17:18.000]
so that's where we seem to be going in

[00:17:20.900]
this big data world the

[00:17:23.599]
benefits years are the

[00:17:25.599]
same as the other scalability multi

[00:17:27.700]
structured data sports team on Reed's

[00:17:29.799]
but you don't copy if you were okay.

[00:17:32.000]
Just keep it in one place in the lake and

[00:17:34.900]
get the metro access to non data scientist

[00:17:37.500]
beer so

[00:17:42.700]
I get the contents of coaches

[00:17:44.900]
that it's new and I got

[00:17:47.599]
there is no relational database and that might speak

[00:17:49.799]
some people out but

[00:17:52.400]
it's something to consider as some of the

[00:17:54.400]
bleeding edge and Leading Edge companies are going

[00:17:56.400]
in this direction and

[00:17:58.700]
maybe more than that of

[00:18:00.799]
that so this

[00:18:04.599]
is a dirty or picture of that architectural to

[00:18:06.599]
little bit more detail but basically saying

[00:18:08.700]
the same thing I just a

[00:18:10.700]
lot of different type lines that come

[00:18:12.799]
out of that that don't use

[00:18:15.000]
data Hobbs sporting

[00:18:17.500]
different types of users in applications

[00:18:19.900]
and its environment were seeing

[00:18:22.099]
can be built that only by traditional data

[00:18:24.299]
engineer 🙂

[00:18:26.299]
no but also Big Data

[00:18:28.400]
engineer super fur open source

[00:18:30.400]
library in trolls

[00:18:33.400]
So I thought I'd mention the assessment that's running right now. It only takes four minutes of your time, and it might give you some interesting insights into how you've progressed with your data lake. With that, I'm going to turn this back over to Eric.

[00:18:50.099]
Alright, and I'm going to turn it over to Steve Wooledge. Folks, feel free to ask questions; I'll post those two slides in just a second. With that, Steve, take it away.

[00:19:06.099]
Great, thanks. I want to share my screen as well. Can you see that okay?

[00:19:16.000]
Yes, I can.

[00:19:18.099]
Alright, everyone, my name is Steve and I work for Arcadia Data. Happy to be here; I've worked in the industry alongside Eric and Wayne for, gosh, fifteen to eighteen years now. I've worked at relational database companies, I've worked at BI companies like Business Objects, I've worked at Hadoop vendors like MapR, and now Arcadia Data, and it's fun to see how the industry is evolving and how customers are using different technologies in different ways. What I'll talk about is that fourth option Wayne pointed out: business users getting value from data lakes using what we call native BI and analytics.

[00:19:59.500]
As a quick snapshot: back in 2008, when I was at a small big-data startup, everybody talked about big data, and it was all about moving from structured data to wilder, multi-structured data, things that didn't fit as neatly into rows and columns, things like JSON and IP traffic data off of sensors. Batch workloads within Hadoop, as Wayne talked about, moved toward more interactive and real-time work, and of course there was big data in terms of volume, but a lot of complexity too. We're way past that now; the platforms have evolved, and relational databases have evolved.

[00:20:37.099]
But I think the need for agility on this data remains. People don't want to have to structure it all in advance, as Wayne said; they want to be able to query things as they lie without doing a lot of structuring, and in some cases you want to be able to query search indices, events and documents, as in document databases, things like that. You don't necessarily want to transform the data and have it modeled perfectly before it enters the environment; you might want to do the transformation in place, in the data warehouse or in the data lake, or discover the data before you transform it into something more structured for reporting. So there's been a lot of change, driven by the nature of hardware and costs coming down.

[00:21:16.000]
Our observation at Arcadia, though, has been that there really hasn't been a lot of innovation around the BI technology itself. SQL is still the language of choice for business users, but BI tools don't necessarily handle the scale, or the complexity of the data, on the platforms that are out there, and that's really what we set out to address. If you're a Game of Thrones fan, the question becomes: can you stand up to the big data and analytics requirements out there, or are you like Jon Snow, who decided to charge an entire army on his own? That's just a bit of fun.

[00:21:52.900]
Really, we founded the company with a mission of connecting business users to big data. As Wayne said, data lakes today tend to be the realm of data scientists, developer tools, those types of things, where you want to go out to the raw data; you don't necessarily want it structured, because you don't want to lose any signal in the noise, so to speak. But there's a lot of value in that data lake that business users can get access to as well.

[00:22:16.200]
I think data lakes today often get treated like a development environment, a place to find and discover information, but if the data is already there and you've found some insights, why not share them with a lot of people right from where the data sits? You don't necessarily need to move it into a special-purpose system to handle the concurrency and SLAs and dynamic workload management and that kind of stuff.

[00:22:38.799]
That's kind of what we do. We've been around since 2012, and we've gotten some recognition from Gartner and Forrester in different technology areas; Forrester calls the category Hadoop-native BI, which is a different category from traditional BI. And we have a lot of big customers with data lakes creating a standard for the lake for their BI, separate from their data warehouse. This is not replacing data warehousing; these are new use cases, new data and new applications that companies like Citibank or Procter & Gamble are deploying, using Arcadia as a front end for the business users.

[00:23:17.799]
So what I'd like to do is talk through the reasons people are choosing a BI standard for the lake and the benefits of that, then show you an actual product demo, and we can get into questions and answers from there.

[00:23:30.099]
Again, the premise is that there's a whole host of BI tools that have been around for decades; I used to work for one. They're optimized for, and work extremely well on, relational technology, but they're not necessarily optimized for the openness and scale available in non-relational data lakes. That's not to say you can't conceptually build a data lake on a relational database, but I'm going to talk mostly about the Hadoop-based and S3 object-store-based types of data lakes that are out there.

[00:24:00.599]
If you think about why people are choosing a BI standard for the enterprise, it's because the traditional relational database was highly optimized to take advantage of the hardware available at the time. These were closed environments, and I don't mean closed in a negative way: I worked for Teradata, and the amount of engineering there, the performance you can squeeze out of a relational database, is amazing, and the work they do to integrate with the hardware is fantastic. But you can't take a separate processing engine and run it on the same hardware where those databases are running, because it just isn't designed to handle that kind of workload; if you wanted to take a BI server and run it on the data warehouse, you can't really do that.

[00:24:42.500]
So BI servers, growing up over time, were a tiered model: you've got data that sits on the server, or on a desktop. These are scale-up environments for the most part; you can cluster them, but they're not distributed systems.

[00:24:58.000]
And what that means is you've got to load the data once into the warehouse, then do a transformation and load it again into the BI server; you've got to secure it at multiple points; you've got a semantic layer that maps back to the schema defined in the database; and then you typically optimize the physical model, maybe twice, once in the data warehouse, if you want to do the optimization there, and again for performance in the BI server. That's a choice people make from an architecture perspective, but oftentimes you're doing it in both places, so it becomes a bit of extra work. There's value in that, but you don't have a native connection, in many cases, to things like semi-structured data.

[00:25:35.200]
If you take JSON files as an example, you're going to flatten them to put them into a table, or the BI tools require a more relational format before they can query against it. And these are not parallel environments, as I mentioned.
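
To illustrate the flattening Steve describes, here is a small PySpark sketch; the nested order structure and paths are hypothetical. A relational-only tool needs the exploded, flat rows, and the nested shape is lost in the process.

```python
# Hypothetical nested orders: {"order_id": 1, "items": [{"sku": "a", "qty": 2}, ...]}
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("flatten-json-sketch").getOrCreate()
orders = spark.read.json("hdfs:///lake/raw/orders/")

# Flattening for a relational-style consumer: one row per line item.
flat = (orders
        .select("order_id", F.explode("items").alias("item"))
        .select("order_id", "item.sku", "item.qty"))

flat.write.mode("overwrite").parquet("hdfs:///lake/flat/order_items/")
```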

[00:25:49.799]
So the idea with Arcadia was to take advantage of the openness of systems like Apache Hadoop, which let you run multiple processing engines on the nodes where the data sits. The whole idea is to bring the processing to the data, not the data to the processing, especially when you're talking about petabytes of data. So we took advantage of that and built a BI server, essentially, that runs on the nodes in the data lake: a fully parallel, distributed system.

[00:26:21.700]
The back-end capabilities that we'll get into are hugely valuable too. We inherit the security that's already in place; we do the physical modeling in place; we give you a business semantic layer over the data, so you can define business terms directly in place; and we understand where the data is located from a distribution perspective, hashing and so on, so we can create query plans that are highly optimized for a distributed environment. And you only do all of this once. You get native connectivity to those data types, we can handle complex types like JSON natively, and it's a fully parallel environment.

[00:26:59.400]
Now, you might say, well, I don't necessarily want all my data in a data lake, and for sure; at every company the numbers are mind-boggling. I was at the Gartner show last year, and they said it again last week: they showed a graph of the number of systems people have, hundreds of databases in a big organization. So yes, you can connect other systems into a system like Arcadia.

[00:27:18.799]
One of the things we've been innovating on is around the Apache Kafka project and our partner Confluent, who have created a SQL interface to real-time streaming data called KSQL. We've integrated with that, so you can have real-time streaming data coming into your dashboard, which might trigger an alert, and then you can drill down into the detail within the data lake, or within your data warehouse environment, or your MongoDB environment, or Solr, or other types of systems where you store data. So it's not just for the data lake, though that's where you get a lot of the performance benefits, for people who want to discover information and then also productionize it within one system they can trust.
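
For a sense of what that KSQL layer looks like (the topic, field and host names below are invented, and the REST endpoint shape follows Confluent's KSQL API of that era, so treat the details as approximate):

```python
# Hypothetical KSQL usage: declare a stream over a Kafka topic of alert
# events, then run a continuous filter query through KSQL's REST endpoint.
import requests

KSQL_URL = "http://ksql-server:8088"

# Declare the stream once (DDL goes to the /ksql endpoint).
requests.post(KSQL_URL + "/ksql", json={
    "ksql": "CREATE STREAM alerts (host VARCHAR, severity INT, msg VARCHAR) "
            "WITH (KAFKA_TOPIC='alerts', VALUE_FORMAT='JSON');",
    "streamsProperties": {},
})

# A continuous query: rows stream back over HTTP as events arrive.
resp = requests.post(KSQL_URL + "/query", json={
    "ksql": "SELECT host, msg FROM alerts WHERE severity >= 3;",
    "streamsProperties": {"ksql.streams.auto.offset.reset": "latest"},
}, stream=True)

for line in resp.iter_lines():
    if line:
        print(line.decode("utf-8"))
```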

[00:27:59.200]
This is another way of saying some of the same things: the data warehouse BI architecture is really a scale-up environment, again optimized for the technology of its time, but it requires data movement, multiple points of security management, and so on.

[00:28:14.700]
There are vendors out there that have come up with a middleware application, sort of a Band-Aid approach, to let traditional data warehouse BI tools connect to another data store within the cluster; they put it on an edge node, or a series of edge nodes. That works okay, but you've still got multiple points of integration and security, and you don't really have the semantic knowledge about the data down on the data nodes. You're still pulling data out, you've lost information about what lives where and where the filters and aggregates should be applied, and you're simply passing SQL back and forth between the BI tool and that middleware box, which interprets it and pulls data back from the data nodes. And those cubes are typically built in a nightly batch run: you've got to build them in advance based on what you think people will want to query, so you lose a little of the freestyle nature of being able to query ad hoc against the full data set.

[00:29:14.299]
That's versus data-native, or native, BI, which pushes down not only the processing but also the semantic knowledge. What we can do then is build dynamic caches of data based on the actual usage of the people issuing queries. They don't have to be built in advance based on what we think people will query; we learn over time and build ways to accelerate performance based on the actual usage of the cluster, because we have that semantic knowledge and everything else from the queries coming into the system.

[00:29:46.200]
We like to call this lossless: like high-fidelity, high-definition television or audio, you want your analytics to be high-definition as well, and if you lose the granularity, because you aggregated the data to fit the low scale of a BI server, you're not going to have that full-fidelity access.

[00:30:04.099]
And then the performance is something that really stands out. This is a benchmark from one of our customers, a telecommunications company that runs a webinar platform somewhat like the one we're on, who was trying to give business analysts, or rather customer service reps, really high performance on queries for 30 concurrent users, so they could troubleshoot things like where the bottlenecks in the webinar platform were and answer different questions for a customer when needed.

[00:30:38.099]
The point of this is not to compare us with a SQL-on-Hadoop engine; we actually leverage SQL-on-Hadoop connectors to the data. The point is that by putting a proper BI server, if you will, within the data lake, you get much better concurrency and performance, the ability to return results in a reasonable amount of time for people.

[00:30:59.500]
That's the kind of performance we see, and again, the way we do it is through some innovative technology we call Smart Acceleration; there's some patent-pending technology around it. In terms of agility, we want end users to be able to access the data lake cluster, get granular access to all the data, and ask any question they want. Then we have these analytical views that are recommended by the system based on machine learning: we look at which tables are being accessed and which queries are being run on a frequent basis, and we recommend back to the admin, hey, you might want to create some aggregate tables, which we store back in HDFS or S3. Deploy those, and the next time a query comes in we can make a cost-based optimization decision about where to route it for better performance, so you can get a hundred times better performance than just scanning the entire data lake and trying to bring back results.

[00:31:54.000]
That's a big difference, and again, it's incremental, it's dynamic, and it's based on actual usage; you don't have to build the entire cube in advance, which is a huge advantage from an admin perspective.
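
Smart Acceleration itself is proprietary, but the general idea Steve outlines, watching query patterns, materializing aggregates and routing matching queries to them, can be sketched in a few lines of toy code; everything below is invented for illustration.

```python
# Toy sketch of usage-based acceleration: count query patterns, materialize an
# aggregate for hot ones, then answer matching queries from the aggregate.
from collections import Counter

query_log = Counter()   # (table, group_by_column) -> times seen
aggregates = {}         # (table, group_by_column) -> name of aggregate table

def record(table, group_by):
    """Track usage; recommend an aggregate once a pattern becomes frequent."""
    query_log[(table, group_by)] += 1
    if query_log[(table, group_by)] >= 100 and (table, group_by) not in aggregates:
        agg = "%s_by_%s_agg" % (table, group_by)
        aggregates[(table, group_by)] = agg  # in reality: CREATE TABLE ... AS SELECT
        print("recommended aggregate:", agg)

def route(table, group_by):
    """Cost-based choice: hit the small aggregate if it exists, else full scan."""
    return aggregates.get((table, group_by), table)

record("events", "channel")          # called once per incoming query
print(route("events", "channel"))    # "events" until the aggregate is built
```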

[00:32:04.900]
Really, the whole premise of the data lake was to provide more data agility, and whether it's on a relational database or on Hadoop doesn't really matter; the point is you've got to be able to bring in data and iterate on it quickly. If you take a data lake and just treat it like another database, where you move data near the BI server, you're forced to take it out of the lake, secure it there, and do the performance modeling in the BI server before you can actually start doing data discovery in a coherent way. And then, oh, by the way, we forgot to put a dimension into that cube that someone wants to look at, so now you've got to go back to IT to add the dimension before you can do the second, or the nth, iteration of the analysis you want to do.

[00:32:48.799]
I've had to live this in my previous lives, and it takes a lot of time and cost to maintain an environment like that, and you lose the business agility, which is the whole promise of Hadoop and data lakes in the first place.

[00:33:02.200]
We've changed all that. We allow you to analyze data as it lies, if you will, in its original form. Yes, you do semantic modeling on it, so you can put different business terms against it, and you can interpret JSON and look at the schema that's embedded within the metadata, but you go ahead and analyze, do the discovery, before you have to do the productionization or optimization of that data structure. And you know what, a lot of business analysts might just want to find some insight and then go do something with it; they're not necessarily going to deploy it out to hundreds of concurrent users, so that final modeling step is optional. It gives you a lot of flexibility and faster time to value: we've moved that entire analytic and visual discovery step from step six all the way up to step three. That's delivering on the promise of agility that drove the data lake in the first place.

[00:33:54.000]
In summary, that's what we do: we give business users access to all the data, complex schemas and all, on a native architecture that provides data governance and integrated security on the data lake and lets you deploy to hundreds and thousands of users in a highly concurrent workload. So with that, I'll switch over and give you a demo of what we've been talking about.

[00:34:26.199]
So I've got Arcadia Data running here in a web browser. Everything is HTML5 and browser-based; there are no browser plugins and no desktop download; everything you see is delivered via the web, and the data all sits back in the data lake, which is huge from a governance and compliance perspective, because you don't have to worry about people downloading data to their desktops.

[00:34:48.800]
What I'm going to do is show you a simple demo that gives you a sense of the tool, and then a more robust application around cybersecurity, which is a big use case we have with some of our clients, some of whom I can't mention; some I can, including US agencies like the Department of Agriculture, believe it or not. In this first case I want to show you how to connect to data and build a simple dashboard, so I'll click on Data, and it pulls up all my connections; you can see things like Solr and Hadoop and Kafka, and things in relational technology as well. And I've got a very simple data set that was created from TV viewership data.

[00:35:32.699]
Maybe we'll find your favorite TV show, Wayne. All this is going to do is pull up a palette here, a dashboard, and bring in tabular data, which doesn't necessarily do a lot for me as an analyst. So let me look at this a different way: I'm going to edit this, because I want to look at all viewers over time. I'll bring in a date field as my dimension, and for measures I'll bring in the record count; let me just refresh this, kind of filtering down and looking at the data. Okay, so for different dates over time, I see the number of total viewers at any point in time across a lot of different TV channels.

[00:36:13.300]
As an advertiser, I might want to know what shows people are watching and what time of day they're watching, those types of things, so let's visualize this in a different way. What we've done is embed machine learning not only into the back end for performance optimization, but also into the front end, to suggest to people the right ways to visualize data. If I just click this button, Explore Visuals, it actually shows me different visualization types using my data, and I can compare which is most useful to me. Do I want a standard bar chart, a scatter plot, a bubble chart, or maybe this calendar heatmap, which would be interesting since we're talking about time? Here I'm looking at the total number of viewers, and I've got hotspots on things like Sunday, when maybe sports are happening, or, I know, your favorite gospel show could be the thing on Thursday, but I'm not really sure what that is. Anyway, let's say I add that to my dashboard.
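
For anyone who wants to reproduce that calendar-heatmap aggregation outside the tool, a minimal pandas sketch might look like this; the file and column names are invented, and it only approximates what the demo computes.

```python
# Hypothetical recreation of the heatmap's underlying aggregate in pandas:
# count viewing records per calendar day, then pivot into a week-by-weekday grid.
import pandas as pd

df = pd.read_parquet("tv_viewership.parquet")  # assumed columns: ts, channel, program
df["ts"] = pd.to_datetime(df["ts"])

daily = df.groupby(df["ts"].dt.date).size().rename("viewers").reset_index()
daily["weekday"] = pd.to_datetime(daily["ts"]).dt.day_name()
daily["week"] = pd.to_datetime(daily["ts"]).dt.isocalendar().week

# One row per weekday, one column per week: the calendar-heatmap matrix.
heatmap = daily.pivot_table(index="weekday", columns="week", values="viewers")
print(heatmap)
```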

[00:37:10.300]
I'll close this, and I've got that visual. Now I want to look at something a little different, which is to break things down by channel and the other things I want to look at, so I'll hit my Edit button. Now I'll use channel and program as my dimensions, and for the measures we'll look at record count again. Refresh the visuals, and now it breaks things down a bit more: what are the top channels, and which programs are most popular. But again, I want to visualize that so it speaks to me a little better, and again it recommends some different visual types. You've got your standard bar charts and scatter plots; we've got things like network graphs down here, which are really interesting and dynamic, but not something I want to use for television viewing; and then your traditional horizontal bar chart.

[00:38:12.699]
Oh, you know what, I forgot to put the filter on, so that's going to take a while to come back, but the final result would be this bar chart on the right, which shows the different channels and which shows are popular. And I've added a filter that lets the user select something like the BET network. If I wanted to talk to an advertiser about the best days to advertise on BET, and which shows, now you can see what those shows are pretty quickly: clicking around a hotspot and filtering by, okay, this day, Wednesday, with a bunch of viewers, what were they actually watching? It was the BET Hip Hop Awards, it was Death at a Funeral, those types of things. So this is a very simple visual to show you what you can do connecting to different data and visualizing it. That's a very simple use case, but big enterprises want to build applications that really help them do things like stop cybersecurity attacks.

[00:39:10.099]
an application with one of our partners Cloudera

[00:39:12.599]
around the Apache spot project so

[00:39:14.599]
this is an open source project which

[00:39:16.800]
rooms together a community response

[00:39:19.099]
to the best way to visualize

[00:39:21.099]
threat from a network and

[00:39:23.500]
user perspective as well as in points in the network

[00:39:25.800]
there's machine learning algorithms that are included

[00:39:28.000]
as part of this project and Arcadia Spartan

[00:39:30.199]
this is to contribute visualization

[00:39:33.099]
types that can help people spot

[00:39:35.199]
issues and swimming spots in

[00:39:38.300]
a visual way onto

[00:39:40.699]
not only detect

[00:39:43.099]
attacks but to do Greenfield threat hunting

[00:39:45.099]
and things like that so shows

[00:39:48.000]
a little bit better any idea here is

[00:39:50.000]
you can have something like an executive summary of you

[00:39:52.199]
it's just a square that's created

[00:39:54.300]
it's bubbling up and it's using machine learning

[00:39:56.300]
that bubble up high potential potential

[00:39:58.400]
threats from an end-user perspective RN

[00:40:00.500]
points and there's some ways

[00:40:02.699]
that you can

[00:40:04.699]
feed back to that model learn over time but this

[00:40:06.699]
is cuz you that bird's eye view of what's Happening across

[00:40:08.900]
your entire Enterprise if

[00:40:10.900]
If you look into the network, from a security analyst's perspective I'm going to look at NetFlow data over time within my environment, and again, machine learning is being used, in the bottom left, to bubble up suspicious activity. As a security analyst I know a lot about the systems that are there, so I might look at the top threat and say, well, this is a demo environment, I think that's a pretty low score in terms of threat, and I'm going to reject that one; some of these others are maybe a little higher. That feeds back into the model, which can learn over time and improve the accuracy of what the machine is doing to detect potential threats.

[00:40:44.199]
And you can do things like, you know, pick the time slider here, and it changes the network graph over here, where we're looking at the flow of data between endpoints, beginning to end. I've selected too little data, since this is a demo application, but you can look at the thickness of a line to understand which connections between systems are strong, identify a specific endpoint and what other endpoints it's connected to, and drill down into ultimately all the detail that's there.

[00:41:19.400]
So for a specific IP address, we can click into it, and it takes me into an exploration view. There's some workflow defined here to find what's of interest to the security analyst, who can collect all the data in one place: click on the username, the proxy actions, etcetera.

[00:41:35.400]
I'm not a security analyst by nature, but essentially they can do some analysis here, and then if they want to share it with other people, it's as simple as going up here: you can email it to somebody, or get the URL and copy and paste it into a case management system, and when someone logs in with the right authentication they see all this data in the context of where the analyst left off in the exploration. That's the kind of thing we want to do on a very large scale for innovations around cybersecurity, IoT systems, and, you know, just general marketing applications and things like that. But hopefully that gives you a flavor of what we do, and that wraps things up from my perspective. If you want to learn more about Arcadia, there are some links we'll leave up, and I'll turn it back to Eric to take any questions we've got.

[00:42:23.300]
Great, and we do have questions, a whole bunch of questions, so let me just dive right in. That was a pretty cool demo you showed there, by the way; I love that Spot application, and I love the machine learning that surfaces things to look at. That seems to me one of the best use cases for machine learning: helping separate the wheat from the chaff and pointing you in the right direction. So, lots of questions here. One is about the TV viewership data: is that structured or unstructured, and what kind of data was that?

[00:42:55.400]
Yeah, I think that was just an open data set; I didn't set up that data set myself. I believe it was structured data. We do have examples of taking JSON and visualizing it on the fly without flattening, but that was not the case with the TV viewership data.

[00:43:09.900]
Okay.

[00:43:11.900]
And let's see, one of the users

[00:43:14.000]
is saying that, according to their own

[00:43:16.300]
policies at their company, they have

[00:43:18.300]
access denied to external file sharing

[00:43:20.400]
or storage. Do

[00:43:22.400]
you guys have any way around that, or what would your recommendations

[00:43:24.800]
be there?

[00:43:28.099]
I didn't quite get that. So they are not

[00:43:30.199]
allowed to do file sharing, was that the question, right?

[00:43:32.599]
External file sharing or storage...

[00:43:38.400]
I'm not exactly sure what they're asking,

[00:43:40.699]
but I guess the point I would make

[00:43:42.800]
is that, again, all the data stays

[00:43:45.000]
in the data lake. You have the

[00:43:47.000]
option to allow somebody to download

[00:43:49.000]
that data to Excel or whatever

[00:43:51.000]
they want to do with it, but in some cases, we

[00:43:53.199]
have a very large healthcare organization that

[00:43:55.300]
did not want that; their

[00:43:57.400]
challenge was they had traditional BI tools

[00:43:59.599]
where people were downloading data to their desktops,

[00:44:01.800]
and they had to try and keep track of all of

[00:44:03.800]
that from a data governance perspective. So

[00:44:07.199]
that was one of the reasons they wanted a native BI

[00:44:09.199]
approach for the data lake: people could still

[00:44:11.300]
do their query, reporting, and analysis in one

[00:44:13.300]
environment, but they could be restricted

[00:44:16.099]
from pulling the data down into a separate system

[00:44:19.000]
somewhere.

[00:44:21.300]
Okay, good. And here's another one. These are really detailed

[00:44:23.699]
questions, folks, and if we don't get to yours

[00:44:25.800]
in this event, we will forward them on to our presenters

[00:44:27.900]
today. Here's a question

[00:44:30.199]
from an attendee asking what kind of

[00:44:32.199]
measures there are in the architecture for

[00:44:34.199]
data sensitivity, like masking

[00:44:36.500]
sensitive information. Can

[00:44:38.599]
you speak to that?

[00:44:42.900]
Yeah, from a high level, we'll interpret,

[00:44:45.000]
or I'm sorry, we will inherit, any

[00:44:47.199]
existing security protocols

[00:44:50.500]
that are in the underlying data platform,

[00:44:52.800]
meaning if you've got Apache Sentry

[00:44:54.800]
or Ranger or

[00:44:57.000]
some security model within that

[00:44:59.000]
environment, we inherit those role-based access controls.

[00:45:02.000]
Now, last I checked, and I don't know,

[00:45:04.599]
maybe there's more happening

[00:45:07.099]
around masking lately, there

[00:45:09.099]
are third-party systems, I'm

[00:45:11.900]
forgetting some of the names now, but we partner

[00:45:14.000]
with those third parties in the security space

[00:45:16.300]
that do the data masking and things like that. So

[00:45:18.300]
we don't provide the full granularity

[00:45:20.599]
of all those different security

[00:45:23.400]
protocols within our system, but that's

[00:45:25.500]
why you have a lot of these third-party providers.

[00:45:27.599]
Anything written in a project like

[00:45:29.599]
Sentry or Ranger, we leverage.
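
As a concrete illustration of the kind of role-based rule being inherited here, the sketch below creates a column-level policy through Apache Ranger's public REST API, which is one way such a rule might be defined upstream of the BI layer. The host, credentials, service name, database, table, columns, and group are all hypothetical.

```python
# Illustrative sketch: defining a column-level access policy in Apache
# Ranger via its public v2 REST API. A BI layer that inherits Ranger's
# role-based access controls (as described above) would then enforce it.
# Host, credentials, service, and all resource names are hypothetical.
import json
import requests

policy = {
    "service": "hadoop_hive",          # hypothetical Ranger service name
    "name": "restrict_patient_columns",
    "resources": {
        "database": {"values": ["healthcare"]},
        "table": {"values": ["claims"]},
        "column": {"values": ["ssn", "diagnosis"]},
    },
    "policyItems": [
        {
            "accesses": [{"type": "select", "isAllowed": True}],
            "groups": ["compliance_analysts"],  # only this group may read
        }
    ],
}

resp = requests.post(
    "http://ranger-host:6080/service/public/v2/api/policy",
    auth=("admin", "admin_password"),
    headers={"Content-Type": "application/json"},
    data=json.dumps(policy),
)
resp.raise_for_status()
print("Created policy id:", resp.json()["id"])  # Ranger echoes the new policy
```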

[00:45:32.400]
Here's a really, really good question, and Wayne,

[00:45:34.800]
feel free to chime in on it; I'll throw it over to Steve

[00:45:36.800]
first, and then Wayne, if you want, jump in.

[00:45:39.500]
What concept in the architecture replaces

[00:45:41.900]
the cube or data mart, the attendee asks?

[00:45:44.199]
And I think that's just the

[00:45:46.199]
massively parallel nature of the technology,

[00:45:48.500]
right, Steve?

[00:45:51.800]
Yeah, that's a very informed question,

[00:45:53.900]
and I was careful, I

[00:45:56.199]
tried to be careful, not to say the word "cube,"

[00:45:58.300]
because I think a cube has a very

[00:46:00.599]
specific notion in people's heads, like

[00:46:02.800]
Essbase, where,

[00:46:04.800]
again, you're building that cube

[00:46:07.099]
in advance; you can build multiple

[00:46:09.199]
cubes, and it becomes an IT

[00:46:11.400]
overhead and burden at some point. So we've

[00:46:13.400]
tried to minimize

[00:46:15.400]
that burden. As I was talking about, we call them

[00:46:17.500]
analytical views, but

[00:46:19.500]
they're really much more than a view; there is actually a

[00:46:21.500]
notion of dimensionality and

[00:46:24.199]
physical data structures and

[00:46:26.500]
modeling, both on disk, on

[00:46:28.599]
the file system, as well as some things we do in memory.

[00:46:30.599]
So you can call it a

[00:46:32.599]
dynamic cube if you want, but we don't force

[00:46:34.900]
you to build it all in advance; we build

[00:46:37.599]
it incrementally over time and will recommend

[00:46:39.900]
dimensionality to

[00:46:41.900]
add to speed up query performance over

[00:46:44.099]
time. So, whatever we call it, it will work

[00:46:46.199]
with you, and it's part of our Smart Acceleration process.

[00:46:48.400]
But yeah, you could call it a cube if you want;

[00:46:50.500]
it just doesn't have some of the legacy baggage of what

[00:46:52.599]
people think about. And I'm

[00:46:54.699]
not trying to bash Essbase, for sure, anything

[00:46:56.800]
like that; it just was designed for a different

[00:46:58.900]
purpose, right?
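
To give a rough feel for what an incrementally built "analytical view" does, here is a toy Python sketch, in no way Arcadia's engine: a small pre-aggregate over just the dimensions queries actually use, folded forward as new detail lands, instead of a full cube built in advance. The dimensions, measures, and data are hypothetical.

```python
# Toy sketch of an incrementally maintained pre-aggregate ("analytical
# view"), as opposed to a full cube built in advance. Not Arcadia's
# engine; dimensions, measures, and data are hypothetical.
import pandas as pd

DIMENSIONS = ["region", "product"]             # dimensionality added so far
MEASURES = {"revenue": "sum", "orders": "sum"}

def refresh_view(view, new_detail):
    """Fold a new batch of detail rows into the existing aggregate."""
    batch = new_detail.groupby(DIMENSIONS, as_index=False).agg(MEASURES)
    if view is None:                           # first build: just this rollup
        return batch
    merged = pd.concat([view, batch], ignore_index=True)
    return merged.groupby(DIMENSIONS, as_index=False).agg(MEASURES)

# Usage: queries grouping by region/product hit this small table instead of
# rescanning the detail, and the view grows only as new batches arrive.
detail = pd.DataFrame({
    "region": ["east", "east", "west"],
    "product": ["a", "b", "a"],
    "revenue": [100.0, 50.0, 75.0],
    "orders": [1, 2, 1],
})
view = refresh_view(None, detail)
print(view)
```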

[00:47:01.300]
Yeah. Wayne, do you want to comment on that real quick?

[00:47:04.900]
Yeah, we're definitely moving away from the

[00:47:07.000]
world of physical cubes, whether

[00:47:09.900]
in Essbase

[00:47:12.300]
or products like that, or

[00:47:14.300]
even in

[00:47:17.199]
the cloud. It seems that with

[00:47:19.300]
all the horsepower that we have with in-

[00:47:22.599]
memory processing, we

[00:47:24.800]
can build these dimensional views on

[00:47:27.000]
the fly or

[00:47:29.500]
maintain them in a dynamic cache,

[00:47:32.099]
like Steve was saying. A lot of vendors are

[00:47:34.099]
doing this kind of thing,

[00:47:36.099]
each with their own twist on it, and

[00:47:40.199]
what I like about how Arcadia does it is that

[00:47:42.199]
it doesn't pull a lot of

[00:47:44.199]
data out of the data lake into

[00:47:47.699]
its own scale-out in-

[00:47:49.900]
memory cache;

[00:47:51.699]
the data doesn't go

[00:47:53.900]
anywhere, it just stays in the lake,

[00:47:55.900]
not moving

[00:47:58.699]
outside of the cluster. But

[00:48:02.300]
yeah, there's a lot of ways to skin a cat, but

[00:48:04.500]
the days of the physical cube seem

[00:48:06.699]
to be pretty much over.

[00:48:11.300]
Yeah, we've got a bunch more good questions here, folks;

[00:48:13.300]
thanks for sending these in. So

[00:48:15.300]
an attendee here is asking about metadata:

[00:48:18.599]
can you talk about metadata management

[00:48:21.199]
and what kind of functionality you have there? I'll

[00:48:23.300]
note there are some open-source projects that have

[00:48:25.599]
tried to address that, and I know,

[00:48:27.699]
musing with other analysts, that at

[00:48:30.699]
least in the early days of the Hadoop ecosystem,

[00:48:33.000]
it felt like they were all making some of the same

[00:48:35.599]
old mistakes again by not really focusing

[00:48:37.900]
on metadata. But Steve, can you talk about how

[00:48:41.000]
metadata is handled with Arcadia?

[00:48:45.599]
Sure. Metadata is handled

[00:48:47.800]
just like you would expect within a

[00:48:49.800]
BI tool. We have the notion

[00:48:51.800]
of a semantic layer, as one example,

[00:48:54.000]
where the business person in,

[00:48:56.300]
you know, finance can name

[00:48:59.800]
tables and columns within the data

[00:49:01.900]
lake based on the business terms

[00:49:03.900]
that they're familiar with; that could be a different term

[00:49:06.800]
within a different department, let's say, that

[00:49:09.400]
maps back to the same data. We can also leverage

[00:49:12.699]
any metadata that's been defined, you

[00:49:15.900]
know, things that have been set up in the Hive metastore and

[00:49:18.199]
other systems like that. We

[00:49:20.300]
partner with companies like Trifacta

[00:49:23.000]
and StreamSets to leverage any ingest and

[00:49:25.300]
transformation types of things that they do, and

[00:49:27.400]
data catalogs and things like that with Waterline.

[00:49:29.599]
So there's a robust

[00:49:31.699]
metadata environment around these

[00:49:33.800]
things, which we all know is required to have a

[00:49:35.800]
governed environment. Early

[00:49:38.400]
on, Hadoop didn't have as much tooling

[00:49:40.500]
in that area, but I think there's a

[00:49:42.500]
robust ecosystem all around metadata

[00:49:44.500]
management and data governance

[00:49:46.900]
in the data lakes now, and we take advantage of

[00:49:49.000]
that as you would expect a BI tool to. And

[00:49:51.199]
you can, again, define some of that with our tool

[00:49:53.199]
and do some lightweight transformation

[00:49:55.500]
work and naming of metadata and things,

[00:49:57.699]
but we again rely on third parties that

[00:49:59.699]
specialize in those things, just like you

[00:50:01.800]
would within a relational environment.
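A bare-bones illustration of the semantic-layer idea just described: two departments mapping different business terms onto the same physical columns in the lake. Every name here is hypothetical, and the SQL generation is deliberately simplistic.

```python
# Minimal sketch of a semantic layer: business-friendly names mapped onto
# physical tables/columns, so two departments can use different terms for
# the same underlying data. All names are hypothetical.
FINANCE_VIEW = {
    "table": "lake.fact_transactions",
    "columns": {"Revenue": "txn_amt_usd", "Booking Date": "txn_ts"},
}
MARKETING_VIEW = {
    "table": "lake.fact_transactions",   # same physical data, different terms
    "columns": {"Sales": "txn_amt_usd", "Purchase Date": "txn_ts"},
}

def to_sql(view):
    """Render a semantic view as a SQL projection over the physical table."""
    cols = ", ".join(
        f'{physical} AS "{business}"'
        for business, physical in view["columns"].items()
    )
    return f"SELECT {cols} FROM {view['table']}"

print(to_sql(FINANCE_VIEW))
# SELECT txn_amt_usd AS "Revenue", txn_ts AS "Booking Date" FROM lake.fact_transactions
```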

[00:50:04.400]
Okay, good. And folks, we will stop

[00:50:06.599]
right at the top of the hour; yours truly has a hard stop.

[00:50:08.800]
I'll try to get to as many more of these questions

[00:50:10.800]
as I can. There's a really good one that I

[00:50:12.800]
think you guys shone a bit on;

[00:50:14.800]
the question is around how you get data

[00:50:17.199]
into Arcadia. Of

[00:50:19.199]
course, you guys are right inside the cluster

[00:50:21.599]
there, right? So the specific question

[00:50:23.800]
is something like, how

[00:50:26.199]
does a user define their own

[00:50:28.300]
data lake or another input to Arcadia

[00:50:30.400]
Data? And that's kind of a problem that

[00:50:32.599]
you solved out of the box, right, by embedding

[00:50:35.000]
right inside the cluster?

[00:50:38.199]
Yeah, exactly. There's no expensive

[00:50:40.400]
importing and moving of data; we're

[00:50:42.400]
just creating views. And it's kind

[00:50:44.599]
of funny, we actually had a

[00:50:46.800]
little internal discussion around the

[00:50:49.000]
naming, because if I go to this data

[00:50:51.000]
tab here, I think I'm still sharing,

[00:50:55.800]
what we call datasets are actually

[00:50:58.000]
what I would call a semantic layer,

[00:51:00.199]
a view

[00:51:02.199]
of data that's already in the cluster.

[00:51:04.199]
So we just define what's

[00:51:08.199]
in that dataset through metadata

[00:51:10.400]
when you pull it up, and

[00:51:12.500]
I'm not as up to speed on

[00:51:14.500]
all these different connections and things like that, but

[00:51:16.500]
yeah, there's no data movement; it's

[00:51:18.500]
just creating views on top of the

[00:51:20.500]
data that's in the environment and defining

[00:51:22.800]
these different semantic layers,

[00:51:24.800]
which, again, you can name with different

[00:51:26.900]
terminology and measures

[00:51:29.199]
and things like that.
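
The "no data movement" pattern might look roughly like the following, using Spark SQL as a stand-in for any in-cluster engine; the paths, table name, and columns are hypothetical.

```python
# Sketch of the "no data movement" pattern: define a queryable view
# directly over files already sitting in the lake, rather than importing
# them into a separate BI store. Paths and names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("views-in-place").getOrCreate()

# Point at the data where it lives; nothing is copied or imported.
events = spark.read.parquet("hdfs:///lake/events/")
events.createOrReplaceTempView("events")

# A "dataset" in the sense above: metadata plus a projection, not a copy.
spark.sql("""
    SELECT endpoint_ip, COUNT(*) AS connection_count
    FROM events
    GROUP BY endpoint_ip
""").show()
```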

[00:51:31.599]
Okay, good. And Wayne, I'll just throw it over

[00:51:33.599]
to you real quick. It's a very clever

[00:51:35.800]
move by Arcadia, it seems to

[00:51:37.800]
me, to embed right in there; that's

[00:51:39.900]
the general movement in the industry,

[00:51:42.199]
away from movement, right? Away

[00:51:44.199]
from moving data.

[00:51:46.199]
I remember way back in the appliance days,

[00:51:48.300]
Foster Hinshaw telling

[00:51:50.300]
me about the whole concept of putting

[00:51:52.500]
the processing where the data lives; that's

[00:51:54.900]
the direction we seem to be going. Obviously, there's going to

[00:51:57.000]
be a very long tail to the old

[00:51:59.000]
way of doing things, but that's a pretty

[00:52:01.199]
clever move. Wayne, what do you think?

[00:52:03.400]
Yeah, I've kind

[00:52:05.599]
of come out with a little manifesto about

[00:52:07.800]
ten characteristics of a modern data

[00:52:09.800]
architecture, and that's one of

[00:52:11.800]
them: don't move the data. So

[00:52:14.099]
we're not there yet, almost

[00:52:16.400]
there, because

[00:52:21.300]
most people are

[00:52:23.300]
still pushing data out into

[00:52:25.699]
a BI cache, like

[00:52:28.199]
some of Arcadia's competitors, or

[00:52:30.300]
into a relational data warehouse, so

[00:52:33.699]
we are still moving data around. So

[00:52:35.800]
what I like about Arcadia is that

[00:52:38.000]
it does hold

[00:52:41.599]
fast to that characteristic

[00:52:44.199]
of a modern data architecture.

[00:52:46.500]
People are definitely

[00:52:48.699]
using the processing power of

[00:52:50.800]
scale-out, in-memory architectures

[00:52:52.800]
to reduce the need

[00:52:54.800]
for a lot of back-end modeling,

[00:52:57.400]
the pre-

[00:53:00.000]
processing of the data into

[00:53:02.599]
a cube or a database, and

[00:53:05.000]
they're spending more of their modeling time on the

[00:53:07.000]
front end, where

[00:53:11.000]
they're essentially

[00:53:13.000]
creating views of potentially

[00:53:15.099]
complex data sets in the back end, simplifying

[00:53:17.400]
them for users,

[00:53:20.000]
using the power

[00:53:23.000]
of that platform to

[00:53:25.199]
pull all that data together in real

[00:53:27.199]
time, with caching

[00:53:29.400]
used only where absolutely needed

[00:53:31.699]
for performance, and also

[00:53:35.300]
kept to a minimum compared to what we used to

[00:53:37.300]
do, where everything was pre-aggregated

[00:53:39.400]
and there was no access to detail.
That's right. Yeah, that's

[00:53:44.300]
a straw in the wind. One of the

[00:53:46.400]
questions here, and

[00:53:49.599]
I'm pretty sure the answer is yes, is about Parquet support.

[00:53:53.099]
Yeah, that's our preferred

[00:53:55.599]
structure; we prefer that the data is

[00:53:57.599]
in the Parquet format.
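
Since Parquet comes up as the preferred format, here is a small sketch of landing a raw CSV drop as partitioned Parquet in the lake; the paths and the partition column are hypothetical.

```python
# Quick sketch of landing data in Parquet, the columnar format preferred
# above. Converting a raw CSV drop into partitioned Parquet is a common
# first step; paths and the partition column are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("to-parquet").getOrCreate()

raw = spark.read.csv("hdfs:///landing/viewership.csv",
                     header=True, inferSchema=True)
(raw.write
    .mode("overwrite")
    .partitionBy("event_date")      # partition pruning speeds up scans
    .parquet("hdfs:///lake/viewership/"))
```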

[00:54:00.900]
Yeah, I kind of figured that. Let's

[00:54:03.199]
see, lots of other questions, though;

[00:54:05.300]
I'm going to try to get to as many as

[00:54:07.300]
possible. Here's a request that came in a while ago: real-time

[00:54:11.199]
data, real-time streaming data.

[00:54:14.300]
Are there custom-designed mechanisms

[00:54:16.800]
in the data lake? How do you deal with streaming data?

[00:54:20.500]
Yes. Streaming data

[00:54:22.500]
today, our integration works

[00:54:24.500]
a couple of different ways. So

[00:54:27.599]
people will talk about Spark Streaming, right,

[00:54:29.800]
as one mechanism, and in that case

[00:54:31.800]
we wait

[00:54:33.900]
for that streaming data to land from Spark

[00:54:36.000]
into the system, and then we visualize it from there,

[00:54:38.000]
so it's not really real time, but

[00:54:40.000]
it's sub-second; once

[00:54:42.099]
it lands, we can visualize it. Another

[00:54:44.199]
really innovative thing is Kafka, or

[00:54:46.900]
I should say Confluent, released the KSQL

[00:54:48.900]
interface to

[00:54:51.199]
Kafka streams,

[00:54:53.400]
Kafka topics I

[00:54:55.400]
should say. So that's now generally

[00:54:57.400]
available, and we were one of the early people,

[00:55:00.000]
in fact the only BI tool right now,

[00:55:02.000]
that can visualize on top of

[00:55:04.099]
KSQL. It's just a connection

[00:55:06.300]
to that; we've got a demo up on our website, we

[00:55:08.800]
can send out a link later to a video showing

[00:55:11.099]
that in action, but that's

[00:55:13.199]
something that's in our latest

[00:55:16.300]
release that people can download and explore

[00:55:18.300]
and try today with Arcadia Instant,

[00:55:20.500]
to get support

[00:55:23.199]
for real-time streaming within the dashboard and then

[00:55:25.400]
be able to take action.
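
For a rough idea of the KSQL piece, the sketch below pushes a continuous query to a KSQL server's REST endpoint and reads rows back as they arrive. The host, stream name, and fields are hypothetical, and the endpoint shape follows the early KSQL REST API rather than anything Arcadia-specific.

```python
# Hedged sketch of querying KSQL over REST: a continuous SELECT against a
# stream backed by a Kafka topic, with rows streamed back as they arrive.
# Host, stream, and fields are hypothetical; the /query endpoint and
# content type follow the early KSQL REST API.
import json
import requests

body = {
    "ksql": "SELECT vin, event_type FROM vehicle_events;",
    "streamsProperties": {"ksql.streams.auto.offset.reset": "latest"},
}

with requests.post(
    "http://ksql-server:8088/query",
    headers={"Content-Type": "application/vnd.ksql.v1+json"},
    data=json.dumps(body),
    stream=True,                  # results come back as they are produced
) as resp:
    for line in resp.iter_lines():
        if line:                  # skip keep-alive blank lines
            print(json.loads(line))
```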

[00:55:27.900]
If we've got time, I can actually pull up a

[00:55:29.900]
quick little view of what I'm describing.

[00:55:34.800]
So this was, in earnest,

[00:55:38.800]
we've got this IoT demo;

[00:55:46.300]
this is another one we built with

[00:55:48.500]
one of our partners, and in this case,

[00:55:50.699]
this is an environment where you're looking

[00:55:52.699]
at, let's say, the use case where

[00:55:54.800]
a fleet manager is managing a fleet of

[00:55:56.800]
cars, and they want to measure what's

[00:55:58.800]
happening out in the field with these cars. So you've got

[00:56:00.800]
an event stream; this is more of the real-time information

[00:56:02.900]
about where a car

[00:56:04.900]
is located, are there different incidents that

[00:56:07.000]
are happening, and in real time

[00:56:09.099]
you're sort of getting that information in here;

[00:56:11.099]
you can see things update.

[00:56:13.400]
In this case, it's just writing the data

[00:56:15.699]
into the file system, either

[00:56:17.699]
Parquet or a

[00:56:19.800]
Solr index for search. You could also use some more

[00:56:21.900]
real-time sub-applications; it's not a true

[00:56:23.900]
stream, you're not reading it in memory, but it's pretty

[00:56:26.500]
fast. And then you can drill to the detail from

[00:56:29.300]
here to look into one of those specific

[00:56:31.500]
VIN numbers of a car that just got

[00:56:33.500]
into a hazardous situation or

[00:56:35.800]
something like that and go into a detailed

[00:56:38.099]
view of what's happening. So then you can

[00:56:40.199]
look at an exploration and analysis for that

[00:56:42.199]
VIN number and the different things that are

[00:56:44.300]
happening. Again, the event

[00:56:47.400]
information is not truly streaming there, but you have

[00:56:49.599]
a real-time dashboard that can be updated,

[00:56:51.599]
and then you drill into detail, because

[00:56:54.500]
you've got all the information in one

[00:56:56.599]
place.
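
The fleet demo's pattern, streaming events landed continuously as files that the dashboard re-reads on a fast refresh, might be sketched like this with Spark Structured Streaming. The Kafka topic, event schema, and paths are hypothetical.

```python
# Sketch of the pattern demoed above: a stream of vehicle events written
# continuously to the file system in micro-batches, which a dashboard then
# reads on refresh; not a true in-memory stream, but quick. The topic,
# schema, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("fleet-events").getOrCreate()

schema = (StructType()
          .add("vin", StringType())
          .add("event_type", StringType())
          .add("lat", DoubleType())
          .add("lon", DoubleType()))

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "vehicle_events")
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# Land micro-batches as Parquet; the BI layer picks them up on refresh.
query = (events.writeStream
         .format("parquet")
         .option("path", "hdfs:///lake/vehicle_events/")
         .option("checkpointLocation", "hdfs:///chk/vehicle_events/")
         .start())
```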

[00:56:59.900]
I love it. I love this stuff, folks.

[00:57:02.000]
We're watching the future here. That's a great quote

[00:57:04.099]
by William Gibson: the future is here already, it's

[00:57:06.500]
just not evenly distributed, at

[00:57:09.500]
least not yet. Like I said earlier, there

[00:57:11.500]
is going to be a very long tail to the old way of doing

[00:57:13.599]
things. You heard Wayne state that 95%

[00:57:16.400]
of environments are still dealing with basically

[00:57:18.800]
batch processes and the other ways

[00:57:20.900]
of getting the job done, but this is the future;

[00:57:23.099]
this is the direction we're going. So a big thanks

[00:57:25.199]
to Wayne for his time today, and of course to

[00:57:27.199]
Steve Wooledge of Arcadia Data.

[00:57:29.300]
You will get that assessment popping

[00:57:31.500]
up when we close out this WebEx, so by

[00:57:33.500]
all means, folks, please do take

[00:57:35.599]
the three to four minutes to go through that little

[00:57:37.599]
puppy and let us know what you think. You can always email yours

[00:57:39.800]
truly, info at insideanalysis

[00:57:42.099]
dot com. Hope to be on the air

[00:57:44.599]
with you tomorrow on DM Radio, with

[00:57:47.500]
some big news as well on that front: we're now coast

[00:57:49.900]
to coast on AM radio, from

[00:57:51.900]
Jacksonville to Atlanta and Chicago, all

[00:57:54.000]
the way out to Los Angeles. Hope

[00:57:56.000]
to have you on the show sometime. You can always tweet

[00:57:58.199]
me with the DM Radio hashtag and let us know your

[00:58:00.400]
opinion. Farewell, folks. We do archive all these

[00:58:02.400]
webcasts for later listening and viewing, so

[00:58:04.599]
feel free to come back, share with your colleagues, et cetera,

[00:58:06.800]
and otherwise we'll talk to you soon. So

[00:58:08.900]
take care. Bye-bye.