
The Future of the Internet's Anatomy

(2014)

This past March the World Wide Web turned 25. Back in 1989, when Sir Tim Berners-Lee first proposed the system that became Hypertext Transfer Protocol (HTTP) and Hypertext Markup Language (HTML), he helped usher in the Information Age. His software gave us a common framework for interacting with each other and exchanging documents from remote locations anywhere across this massive network of networks — the Information Superhighway, as it was called in the 90s. It enabled the creation of the World Wide Web using extant Internet transmission protocols (TCP/IP) developed in the 70s by Robert Kahn and Vinton Cerf, and before long it would become the architectural backbone of the modern Internet.
 
What no one could have predicted then, however, is just how rapidly user culture and performance demands would evolve beyond the Internet’s original design parameters. For more than 25 years, the IP-based Internet has proven remarkably resilient in spite of increasingly sophisticated security attacks and complex content delivery requirements. But looking forward into a future where ubiquitous network computing appears destined to become a cornerstone of the average person’s way of life, technologists generally agree that it’s time to explore new Internet architectures altogether. Computer scientists are attempting to redesign the Internet from the ground up using wholly new paradigms unbiased by present design assumptions.
 
Just what that architecture will look like is uncertain at this point, and it’s even more uncertain how soon a well-tested “clean slate” model could be widely implemented. But thanks to an ambitious National Science Foundation (NSF) initiative, mirroring the efforts of other interested groups worldwide (like FIRE and AKARI), we’re beginning to catch a glimpse of some of the major design trajectories that will characterize the Future Internet.
 
One of the foremost design concerns is security, which exists in the present Internet only as an overlay. When Internet Protocol was first developed, its designers were not thinking about a global network environment where vulnerabilities would be easily exploited by malicious intruders; they were simply creating a connectionless communication mechanism that would allow a source host to exchange information in a uniform manner with a destination host across any possible configuration of intervening networks. Since IP functions as the minimum network service assumption (or “narrow waist”) of the network stack, the current Internet is intrinsically concerned with host location and packet transfer, not data integrity or host authenticity (check out this 2008 CPNI security assessment for some technical details).
 
Security must be addressed at the fringes of the network, with applications that scrutinize data before it is sent and after it is received. Future Internet designers see this as a big problem. They want to make security a more basic feature of data transmission, envisioning a network infrastructure where self-identifying data packets are examined and verified at multiple points en route to their destinations instead of only at the endpoints. Among other things, this would permit intermediate network devices like routers and bridges to perform security tasks that IP components are not designed to handle.
 
One NSF-funded project, termed eXpressive Internet Architecture, makes this kind of security the centerpiece of its design approach. XIA functions by defining various communication entities (or principals) and then specifying minimum rules for communicating with each, effectively making the “narrow waist” of the Internet far more pervasive and customized according to the type of interaction taking place. Hosts, content, and services are the three main types of principals, but XIA allows for the creation of other principal types to accommodate future communication needs. Principals talk to each other using 160-bit expressive identifiers (or XIDs) that employ cryptographic hash technology and function as self-certifying, intrinsically trustworthy identification. (For a point of reference, and as proof of how resilient this type of data handling is, consider how similar this is in concept to the way Bitcoin transactions can be publicly validated in an anonymous peer-to-peer network.) Tampering with communications across an XIA-like Internet would be extremely difficult, to say the least.
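
To make the self-certifying idea concrete, here is a minimal sketch in Python; it is illustrative only, not XIA's actual code, and the helper names (make_xid, verify_xid) are invented for this example. The identifier is simply the 160-bit hash of a principal's public key, so any node along the path can check a claimed identity without consulting a directory:

```python
import hashlib
import os

def make_xid(public_key: bytes) -> str:
    """Derive a 160-bit expressive identifier (XID) from a public key."""
    return hashlib.sha1(public_key).hexdigest()  # 160 bits = 40 hex chars

def verify_xid(xid: str, presented_key: bytes) -> bool:
    """Anyone can check a claimed identity without a trusted third party."""
    return hashlib.sha1(presented_key).hexdigest() == xid

host_key = os.urandom(32)                   # stand-in for a real public key
xid = make_xid(host_key)
assert verify_xid(xid, host_key)            # the legitimate host checks out
assert not verify_xid(xid, os.urandom(32))  # an impostor's key fails
```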
 
Several other areas of interest are represented in other NSF-funded projects. These concern aspects of Internet use for which IP is simply not very efficient. The Named Data Networking (NDN) project, for instance, re-imagines the client-server model of network interaction, recognizing that today’s predominant uses of the Internet center on content creation and dissemination rather than simple end-to-end exchanges. In place of a protocol that asks “where” on the network a piece of content can be found, NDN proposes one that asks “what” the user is looking for. It then retrieves the content from whatever the nearest “container” might be (such as a router’s cached memory). With NDN, data become first-class communication entities, and the “narrow waist” of the network is centered on content instead of host location. IP names the location; NDN names the data. As with XIA, data transmitted across the NDN Internet are intrinsically secure, because data names contain “integrity signatures” demonstrating their provenance and rendering them trustworthy without regard to where they presently reside on the network. Successfully implemented, an NDN Internet would therefore make content creation and dissemination vastly more efficient and worry-free than it is today.
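
A toy sketch of the retrieval model may help, again in Python with invented names, and with an HMAC standing in for NDN's real public-key signatures. Consumers request data by name, fetch it from whatever cache is nearest, and verify a signature bound to the name and content rather than trusting the source's location:

```python
import hashlib
import hmac

PUBLISHER_KEY = b"publisher-secret"  # stand-in; NDN uses public-key signatures

def publish(name: str, content: bytes) -> dict:
    """Bind an integrity signature to the name + content, not the location."""
    sig = hmac.new(PUBLISHER_KEY, name.encode() + content, hashlib.sha256).hexdigest()
    return {"name": name, "content": content, "signature": sig}

def verify(packet: dict) -> bool:
    expected = hmac.new(PUBLISHER_KEY, packet["name"].encode() + packet["content"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, packet["signature"])

packet = publish("/videos/demo/1", b"...video bytes...")
router_cache = {packet["name"]: packet}   # a copy cached mid-network
fetched = router_cache["/videos/demo/1"]  # ask for "what", not "where"
assert verify(fetched)                    # trustworthy wherever it was found
```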
 
Researchers working on the NEBULA project, meanwhile, are operating from the assumption that the present trend of migrating data and applications into the cloud (“nebula”, by the way, is Latin for “cloud”) will only continue. They envision an infrastructure where cloud computing data centers form the highly reliable and highly secure Internet backbone, and users access and compute directly across a network replete with applications and content in a “utility-like” manner — that is, whenever, wherever, and however they choose. This would represent a significant departure from the present Internet, where the cloud still represents an aggregation of endpoints on a client-server architecture. With NEBULA, the Internet would truly be the cloud, not merely function like one. The difference is subtle, but it is highly consequential from an engineering standpoint.
 
In a similar vein, researchers working on the MobilityFirst project assume that the Internet should be centered not around interconnecting fixed endpoints like servers and hosts, but around interconnecting users themselves, who are increasingly relying upon mobile devices and wireless networking to dynamically access the Internet. Their first principle is that mobility and wireless are the new norms for the Internet, and so a major thrust of their research is to design an architecture that facilitates robust communications in spite of frequent connections and disconnections from the network. The MobilityFirst paradigm includes a decentralized naming system, allowing for a more flexible identification system on the network by eliminating IP’s effective “global root of trust” in the DNS name resolution service. Similar to other architectures already mentioned, MobilityFirst relies on cryptography — in this case to provide a way for a name certification service to validate user addresses without relying on a centralized authority, creating an intrinsically secure and simultaneously more efficient way for mobile users to connect to the Internet on-demand.
 
This is a lot of information to digest all at once, and it only scratches the surface of the many efforts underway to revamp the Internet. Even so, we can already see that the Future Internet will be based on wholly new paradigms for how data is named, processed, stored, and secured across a global network of networks. All of these concepts are currently being tested on NSF’s Global Environment for Network Innovations (GENI), and it will be exciting to see how these design trajectories come together during the next decade. By the time the Web turns 40, it may very well have a whole new anatomy.
- Brandon Chicotsky (2014)

Understanding Bitcoin Growth

(2014)

Reporters working tech and financial beats are paying close attention to the relatively new digital currency known as Bitcoin. There is a notable frenzy taking place over the online trading asset. In the month of November alone, Bitcoin’s value appreciated more than 500 percent on the popular exchange known as Mt. Gox, topping out at more than $1,000. Early adopters and financial speculators are becoming overnight millionaires in what mainstream media outlets have begun to call a digital gold rush.
 
Meanwhile, startups like Coinbase and KnCMiner are raking in huge profits by supplying the hardware and software solutions users need to get in on the Bitcoin “gold rush”. This is clearly a game-changing development in global finance, but many still don't understand what Bitcoin is or how it affects the market for currency trading.
 
It’s time to get up to speed.
 
The precise details of how the protocol works are considerably complex, but the basic concept is both simple and elegant. Bitcoin is a semi-anonymous digital currency that operates by means of cryptography and a decentralized peer-to-peer payment network. Trust is generated in the Bitcoin system through open-source transparency, irreversible transactions, and payment processing via global network consensus.
 
Individual Bitcoins exist as blocks of encrypted electronic signatures. Anytime a Bitcoin changes hands, the new owner’s electronic signature is added to the block, and the transaction is recorded and confirmed by the Bitcoin network in an encrypted public ledger called the block chain. This ledger, which anyone with the necessary specialized computing hardware can help maintain and extend, contains a permanent record of every Bitcoin transaction that has ever taken place.
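
A greatly simplified sketch (in Python, with invented names and none of Bitcoin's real transaction format) shows why such a ledger is tamper-evident: each record embeds the hash of the record before it, so editing history breaks every later link.

```python
import hashlib

def transfer(ledger: list, coin_id: str, new_owner: str) -> None:
    """Append an ownership record chained to the previous record's hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    record = f"{coin_id}->{new_owner}|{prev_hash}"
    ledger.append({"coin": coin_id, "owner": new_owner,
                   "prev_hash": prev_hash,
                   "hash": hashlib.sha256(record.encode()).hexdigest()})

ledger = []
transfer(ledger, "coin-1", "alice")
transfer(ledger, "coin-1", "bob")   # alice pays bob

# Tampering with an early record breaks every later hash link.
ledger[0]["owner"] = "mallory"
rebuilt = hashlib.sha256(f"coin-1->mallory|{'0' * 64}".encode()).hexdigest()
assert rebuilt != ledger[1]["prev_hash"]  # the chain exposes the edit
```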
 
Consumers acquire Bitcoin by one of three methods. The most obvious options are to receive coins as payment for goods and services, or else to purchase them directly from an exchange such as Mt. Gox or Coinbase. But entrepreneurial-minded technologists can also mine new Bitcoin for themselves. “Mining” is the term used for processing transactions on the Bitcoin network. This endeavor requires specialized computer hardware to supply the processing power needed to verify transactions and extend the block chain. In exchange for the use of that computing power, the system rewards users with freshly minted Bitcoin of their own. This is the means by which new Bitcoin enter circulation, and the whole system is designed to cap the supply at twenty-one million coins.
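
The proof-of-work idea behind mining can be sketched in a few lines (toy difficulty and block format, not Bitcoin's actual parameters): miners race to find a nonce that gives the block's hash a required number of leading zeros, work that is expensive to perform but trivial for everyone else to verify.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce whose hash has `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce  # proof of the work spent; the miner earns the reward
        nonce += 1

nonce = mine("alice->bob:1BTC")
print("winning nonce:", nonce)  # verifying takes one hash; finding it took many
```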
 
Individual Bitcoin owners make payments by means of wallets, which generally take the form of smartphone apps or computer programs. Each wallet has a public address and a private key. The public address functions much like an email address, which the wallet owner can share in order to receive payments. The private key, on the other hand, functions more like a PIN, allowing the wallet owner to authorize the release of Bitcoin to another user’s wallet.
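
A conceptual sketch of the keypair idea follows; real Bitcoin wallets use elliptic-curve keys and Base58 addresses, so the derivations below are stand-ins chosen only to show the shape of the scheme:

```python
import hashlib
import hmac
import os

private_key = os.urandom(32)  # kept secret; functions like a PIN
public_key = hashlib.sha256(b"pub|" + private_key).digest()   # stand-in derivation
public_address = hashlib.sha256(public_key).hexdigest()[:34]  # share to get paid

def authorize(key: bytes, payment: str) -> str:
    """Only the private key holder can produce a valid authorization."""
    return hmac.new(key, payment.encode(), hashlib.sha256).hexdigest()

print("send coins to:", public_address)
print("authorization:", authorize(private_key, "release 0.5 BTC to <address>"))
```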
 
Although anyone may own Bitcoin currency, nobody owns the Bitcoin network. This is both its greatest strength and, arguably, its most controversial facet. As with any open source software, Bitcoin depends upon the consensus and personal investment of its own users to generate trust and motivate wider acceptance by retailers and state governments. More than 250 retailers, including such popular online services as WordPress and CheapAir.com, banded together over the holiday weekend for Bitcoin Black Friday in an effort to increase awareness of the currency and expedite its wider adoption by other retailers.
 
From a vendor’s perspective, the Bitcoin network provides an alternative to the hefty transaction fees associated with handling traditional currencies through banks and credit card companies, allowing them to price their goods lower while still realizing substantial profits. In a recent Senate hearing, U.S. government officials also expressed their receptiveness to the new currency. This is noteworthy, considering the FBI’s recent shutdown of the Bitcoin black market known as Silk Road and growing concerns over the attractiveness of Bitcoin for illegal transactions involving drugs, weapons and international terrorism.
 
It’s too soon to tell whether the digital cryptocurrency will live up to its promises over the long haul. Many still believe this boom is too good to be true. It will be exciting to watch what happens in the coming months as the system matures and market forces begin to level out. At its present rate of growth, Bitcoin could gain sufficient clout to compete for a place of prominence in international commerce, albeit controversially. And that means we could very well be watching economic history in the making.
 
-Brandon Chicotsky (2014)

The Value of Fifteen Minutes of Fame

(2013)

According to an estimate by social psychologist Orville G. Brim, some four million Americans admit to having an intense desire to become famous. Children’s media researcher Yalda Uhls also found that in 2007, fame was the No. 1 value communicated to preteen TV audiences. This growing fascination with celebrity culture presents a market opportunity for public relations specialists. It also helps explain why the employment market for PR professionals is expected to grow at a faster-than-average rate between now and 2020.
 
Scholars like Mathieu Deflem at the University of South Carolina are helping to pioneer an area of academic specialty centered on the questions of fame and celebrity. Dr. Deflem’s most popular course focuses on the fame of Lady Gaga. As more people aspire for fame online, in the arts, in sports and through television programming, the public relations community will grow even more.
 
But just what is fame in the first place? Strictly speaking, the words “fame” and “celebrity” don’t mean the same thing. Fame deals with how a person’s name and accomplishments are esteemed and remembered by others over time. It’s not a black-or-white state of being.
 
Celebrity, on the other hand, has to do with the widespread recognition of a person’s name and face that comes from mass-media coverage. It’s possible to be well known and well respected (hence famous in some sense) without ever attracting the kind of media coverage necessary to propel an individual into the celebrity category. It’s also possible to become a celebrity by getting the attention of a large audience without doing anything of special merit to “earn” that fame. Such overnight celebrities emerge in a world chock-full of digital media, reality TV programming and social media.
 
Researchers are probing the question of permanence. Is today’s fame really just the fleeting, 15-minute phenomenon that artist Andy Warhol predicted everyone would experience, or is it something more enduring?
 
This question matters when considering an individual’s name brand online. A brand carries value, especially when it is associated with solving a pain point in the marketplace or with converting consumer interest into sales. In a media-saturated world where it’s apparently easier than ever to enter the public eye, is fame becoming less significant and more transient than in the past? Not according to some of the most recent findings on this topic.
 
Challenging common assumptions, an article recently published in the American Sociological Review describes how being part of the public conversation for long enough causes a person’s name to lock in and become a relatively permanent fixture. The business implication is that reality television stars, Internet stars and other locked-in celebrities whose monetary value may seem arbitrary can offer significant endorsement power for your product or service.
 
For all the questions that remain on the subject, one certainty exists: Celebrity culture is a mainstay, and it’s growing. As long as the public continues to demand a never-ending string of celebrity icons to adore, there will continue to be a viable market for those who know how to provide them.
 
This creates an opportunity for those interested in participating in the celebrity economy to offer tailored services. It’s also a growing market for product placement and brand association with specific demographic targeting. Overall, demystifying the mechanics of fame ought to be a top interest for anyone aiming to make a lasting impact on the emerging marketplace of celebrity followership.
 
For more, visit: HowToBuildFame.com 
 
- Brandon Chicotsky (2013)
The Case Against 'Wonder Boys'

(2013)

Many startup founders are too concerned with how much financing they can raise. There's a “wonder boy” trend taking place after early-stage financing. Instead, there should be an increased focus on founders who develop an offering that addresses genuine consumer needs. Inadvertently, venture capital firms are misdirecting attention to strong fundraisers, instead of drawing attention to founders who are mastering the art of execution (building a company from an idea and generating cash-positive growth). 
 

VCs operate from a profit model that accounts for the under-performance or failure of the majority of companies they take on. It's a law-of-averages approach; only a fraction of portfolio companies will provide the 5x to 10x (or greater) returns desired (accomplished in several ways: an IPO, multiple rounds of capital injections, or a boom in revenue).

 

VC decisions may reflect favorable due diligence, and media stories reporting on VC deal-flow are often interpreted as an early promise of success. However, substantial seed round and series-A funding does not determine the marketplace’s buying behavior. Early rounds of capital may not warrant the “wonder boy” attention to founders, either. Talent in fundraising is often different from talent in execution. Founders should also be validated by their execution skills and ability to pivot, which is usually performed after testing a user base to determine a new course of action. 

 

The nature of VC business is such that even the most prudent investor cannot accurately predict which companies have long-term viability. Several factors must be considered:

 

  • Certain market segments excite investors more than others, and this excitement may trend differently from year to year. Last year, for instance, software and clean tech startups garnered significant funding from VCs (according to data published in the 2013 NVCA Yearbook). VC decision-makers are often encouraged to invest in these key markets by their stakeholders — the limited partners who actually provide the money.
  • Sometimes an investor is more interested in a company's speculative value than in its long-term viability. Such a firm might be willing to fund a risky startup in a “hot” market in hopes of liquidating its shares after early financing rounds.
  • Investors are people, too. Sometimes their favorable and highly subjective impressions of company founders and their teams can tip the scales of optimism, prompting a big bet on an untested idea.

 

Do not equate investor confidence with customer validation. Nearly three-quarters of all venture-backed startups in the U.S. fail to live up to investor expectations. Even a cursory review of VC industry statistics should caution would-be entrepreneurs against wooing investors before courting prospective customers. Nevertheless, too many startups allow these investment concerns to distract them from honing their product.

 

Consider a new idea-to-market strategy altogether. What if prospective startup founders focused less on the question of how to attract early funding for their ideas and instead diverted more attention to understanding the real “pain points” of their markets? Instead of asking, “How can I persuade an investor to give me a shot at this business model?” they should ask, “What problems are consumers trying to solve in their lives, and what kind of product would they be willing to pay for to address these problems?”

 

Breaking further from the “wonder boy” trend, we suggest that bootstrapping (self-funding) is undervalued and under-appreciated when validating prospective startup success. Such an approach may enable a company to expand at a strategic pace that reflects user acceptance and market education — not simply an investor's timetable for an exit strategy. Self-capitalizing growth may also help founders avoid inherent distractions prompted by public funding announcements.  Some examples of distractions are:

 

  • Over-exposure in the media may entice competitors;
  • An elevated profile of the company founders may detract from normal work-flow; and
  • An influx of hiring may force the company to grow at an unsustainable pace (i.e., a high burn rate may force the company to perpetually raise more and more capital).

 

Self-capitalization has its own set of risks and considerations, and one shouldn’t completely rule out VC funding, especially after a strong customer base has been established. For rapid growth and satisfying exceptionally high user demand, VC funding can push a product to market quicker than most funding methods (considering alternatives like angel funding, group equity funding, crowd-funding or a pre-product company acquisition). 

 

Many successful startups validate their ideas in the market before pursuing large injections of capital. They spend less time trying to direct their users and more time learning from them. They brand themselves not merely as brilliant innovators, but as pioneers of real solutions to real problems. Marc Nager, CEO of Startup Weekend, summarizes this point well: “Founders should keep in mind that their team is more valuable than their idea, [that] their customer is more important than their product, and that sustainability is greater than funding.”

 
-Brandon Chicotsky & Kristen Carson (2013)
Mobile Payments Poised to Change Shopping

(2013)

Smart phones have arrived – for nearly the entire world. As the newly elected Pope Francis emerged for the first time to greet his eager audience, one observant photojournalist captured this telling photo, poignantly demonstrating mobile devices’ ubiquitous currency among the world's citizens.

 

One of the more exciting signs of this trend is the recent proliferation of mobile payment systems. Even a cursory review of the companies listed at Mobile Payments Today reveals that an entirely new market has developed around transforming financial interactions between retailers and mobile consumers.
 
A decline in credit cards?

 

Consider 3-year-old startup Square, whose remarkable plug-in card reader allows small-business owners to use their smart phones and tablet computers as sophisticated point-of-sale systems for an increasingly cashless society. Babysitters and food truck operators, private tutors and mom-and-pop storefront owners alike can now accept credit and debit card payments via the mobile technology they already use every day.
 
Smart phone users can select from among “mobile wallet” apps such as Google Wallet or the feature-rich Isis Mobile Wallet (a venture backed by AT&T, Verizon, and T-Mobile). These products enable consumers to access their credit and deposit accounts (plus store coupons and loyalty program IDs) via their wireless devices — all right at the checkout line, virtually eliminating the need for old-fashioned credit and debit cards. These, and other products like MasterCard PayPass, integrate near field communication technology, permitting consumers with compatible devices to simply swipe their phones in front of a receiver at checkout, instantly authorizing payment while simultaneously capturing and archiving transaction information for personal financial records. Person-to-person monetary exchanges no longer require cash or checks; smart phone users with the new BumpPay app simply tap their phones together and — voila! — the transaction is complete.
 
Consumer doubts on mobile payments
 
This is technology that could revolutionize the shopping landscape, softening the boundaries between online and in-person sales transactions while opening new, targeted marketing avenues for retailers. But consumers are apparently not yet sold. A recent comScore study reveals that only 51 percent of U.S. consumers are even aware of digital wallet products other than PayPal (which benefits tremendously from years of strong online brand recognition), and even fewer are actually using them (less than 8 percent for all but PayPal). Industry leaders convening at a Kansas City symposium last year explained that consumers doubt the security of mobile transactions, and they don’t see any significant advantage to mobile payments.
 
Nevertheless, many remain keenly optimistic about the future of mobile payments. After a high-profile announcement that Starbucks will partner with Square to provide mobile payment options for its enormous customer base — including the 18-35 age group with especially high rates of smart phone penetration — even skeptical industry giants like VeriFone are responding to customer interest in this new frontier. And earlier this year Visa announced a deal to incorporate its payWave system into all future NFC-enabled Samsung smart phones, setting a strong precedent for closer partnership between merchant services providers and mobile technology manufacturers.
 
In January, industry observer Steven Gurley pondered the boldness of Isis' backers, who are fully aware that it will take several years before it becomes apparent whether mobile payments are a passing fad or a lasting trend. “What would possess Jim Stapleton (Isis' CSO), a Harvard grad and AT&T career veteran, to take on such a risky assignment?” he asks. “Whatever the reason(s), I don't believe that anyone with Jim's credentials would want to be forever associated with a very public flop. He must, therefore, feel very confident that Isis will succeed.”
 
This mobile ecosystem continues to diversify — both in its consumer demographics and its product demands. Companies like Square and Isis are first-to-market leaders, but as more companies emerge with comparable or superior products, there will be no shortage of buyers and users. The world is no longer merely plugged in; it’s texting, purchasing, calling and browsing in the mobile space.
 
-Brandon Chicotsky (2013)

Valuing a Startup

(2013)

Venture valuations are critical and under-researched in entrepreneurial financing. For this reason, James Mahoney of the Federal Reserve Bank of New York wrote “New Venture Valuation by Venture Capitalists: An Integrative Approach.” Mahoney understood that guiding metrics and open-source data reports are not readily available to study successful seed-round injections. His writing inspired this piece.

 

While key factors can be identified that influence the economic value of a startup, assigning a value to a new company, pre-product or pre-revenue, is seemingly more art than science — but it shouldn’t be. The truth about valuing a startup is that it’s often an educated guess based on both objective data and subjective criteria.
 
Seed capital often describes the first round of investment into a new venture. This funding is commonly allocated to product research and development and to organizational infrastructure (patent filings, company registration, and hiring).
 
Seed capital can be distinguished from venture capital in that venture capital investments tend to involve significantly more money, an arm's length transaction, and much greater complexity in the contracts and corporate structure.

Where does most seed capital come from? People close to the founder(s) of the new firm – family, friends, and close business associates.

 

Whether seed or venture funding, a valuation has to be constructed and agreed upon by the investor and the company. While there is no standard percentage that a typical new venture gives up to investors to obtain seed capital, there are standard processes and accepted models that investors and entrepreneurs can use to create reasonable valuation ranges.
 
The most widely used new venture valuation method is Discounted Cash Flow Analysis. For most startups – especially those that have yet to start generating earnings – the bulk of the value rests on future potential.
 
DCF involves forecasting how much cash flow the company will produce in the future and then applying an expected rate of investment return (ERR) as the discount rate, which determines how much that future cash flow is worth today.
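
A minimal sketch of the calculation, using hypothetical forecast numbers, shows how heavily the discount rate drives a startup's value:

```python
def discounted_cash_flow(cash_flows, discount_rate):
    """Present value of forecast cash flows for years 1, 2, 3, ..."""
    return sum(cf / (1 + discount_rate) ** year
               for year, cf in enumerate(cash_flows, start=1))

forecast = [100_000, 250_000, 500_000, 900_000, 1_500_000]  # hypothetical
print(f"Value at a 40% discount rate: ${discounted_cash_flow(forecast, 0.40):,.0f}")
print(f"Value at a 10% discount rate: ${discounted_cash_flow(forecast, 0.10):,.0f}")
```

The same five-year forecast is worth roughly two and a half times as much at a 10 percent discount rate as at 40 percent, which is why the risk premium assigned to a young company matters so much.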
 
According to Ben McClure, the director of Bay of Thermi Limited, an independent research and consulting firm that specializes in preparing early-stage ventures for new investment, a higher discount rate is typically applied to startups, as there is a high risk the company will fail to generate sustainable cash flows.
 
The trouble with DCF is the quality of the analysis depends on the stakeholders’ ability to forecast future market conditions and make good assumptions about long-term growth rates. A third-party analyst may improve the outcome, as vanity numbers won’t come into play. In many cases, projecting sales and earnings beyond a few years becomes a guessing game.
 
Another approach is to look at market multiples of comparable companies. Venture capital investors may prefer this approach, as it gives them a better indication of what the market is willing to pay for a company. The market multiple approach values the company against recent acquisitions of similar companies in the market.
  
Dr. H. Randall Goldsmith of Venture Capital Tools says that some investors use a “back-in method.” This approach focuses on the investor's expected rate of return on his investment. New venture investment rates of return will probably fall in the 25 percent to 50 percent annual return range depending on the progress and performance of the company.
 
To illustrate, these terms are important to understand:
 
Price to earnings ratio — P/E is a company's per-share price compared to its per-share earnings. This demonstrates market value relative to the company’s projected or actual performance.
Net earnings – Often refers to after-tax net income: the company's total profit over a given period of time.

To explain early-stage deal-flow, assume an investor’s pre-determined expected rate of return for high-risk deals is 40 percent per year. Assuming a deal with the following variables: Price to earnings ratio of 15; projected payback in Year 5 after investment; net earnings in Year 5 of $2 million; and a $300,000 investment, according to Goldsmith the calculations would look something like this:
 
Expected total return on investment: (1 + 0.40)^5 × $300,000 = $1,613,472.
Market valuation at time of sale: P/E 15 × $2 million net = $30 million.
Percentage of investor’s equity: $1,613,472 / $30 million = 5.4%
 
In this case, the investor's percentage of ownership is based upon an anticipated future valuation of $30 million and an anticipated future return of $1,613,472.
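
Here is the same back-in arithmetic as a short sketch, using the figures from the example above:

```python
investment = 300_000
required_return = 0.40        # investor's expected annual return
years_to_payback = 5
pe_ratio = 15
year5_net_earnings = 2_000_000

expected_total_return = investment * (1 + required_return) ** years_to_payback
exit_valuation = pe_ratio * year5_net_earnings
equity_stake = expected_total_return / exit_valuation

print(f"Expected total return: ${expected_total_return:,.0f}")  # $1,613,472
print(f"Exit valuation:        ${exit_valuation:,.0f}")         # $30,000,000
print(f"Investor equity:       {equity_stake:.1%}")             # 5.4%
```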
 
Venture financing can be structured to meet the unique circumstances of the company and the individuals involved. Key employee stock options can vest based on any number of factors – employee performance, company performance, longevity, or other criteria.
 
Early-stage investors can choose to diversify their risk by opting for a debt-equity mix (debt being funds loaned to the company by the investor, equity being an ownership stake in the firm) or by purchasing preferred shares, which are almost always convertible into common shares at some future date.
 
Common stock is a form of equity ownership that usually provides voting rights, also described as “ordinary” shares.
 
Preferred stock is also equity ownership with the distinction that dividends must first be paid in full to all preferred shareholders before common shareholders can be paid.
 
The recent Academy Award-winning film “The Social Network” portrayed how Facebook became a business with early-stage funding, along with other shenanigans.
 
While Facebook is a unique success story, it did follow a familiar pattern – early seed money amounting to several thousand dollars from a close friend and insider; then venture capital infusions in increasingly greater amounts; and finally, an initial public offering.
 
Another example is the story of Apple Computer's early-stage financing, which reads almost like fiction. In April 1976, Steve Jobs and Steve Wozniak had their first order for 50 “Apple I” computers – a fully assembled circuit board containing about 30 chips.
 
At this point, Apple had a third partner, Ronald Wayne. Facing a cash flow problem to pay for enough parts to build the units to meet their first order – each costing about $100 to build – Jobs arranged a 30-day credit deal with a local supplier, and the three Apple partners then assembled the 50 units at night in Jobs’ garage. They filled their first order in 10 days.
 
As “The Pop History Dig: Apple Computer” details, two weeks later, Wayne wasn’t so sure about what he’d gotten himself into. He decided to pull out of the venture. He was given $800 for his 10 percent stake in the company.
 
In 2012, Apple’s market value exceeded $586.8 billion, making it the most valuable corporation on the planet. Wayne's 10 percent stake in Apple would have been worth nearly $60 billion.
Naturally, company founders try to avoid repeating Wayne’s shortsighted market perspective, sometimes by embracing highly speculative valuations. Discounted Cash Flow Analysis and Expected Rate of Return calculations will help temper overly optimistic projections, as will objective data on the specific market segment to which the company applies.
 
Ultimately, valuations are based on the ability to leverage talent, market data, organizational competency, cost analysis, ROI potential, product performance, and execution potential, among other factors pertaining to future performance.
 
Using an agreed upon formula, or method, to determine new venture valuation eliminates some of the guesswork from the subjective process of early-stage financing.
Whatever metrics or studies are applied, there’s no widespread, prescriptive method for early-stage valuations and deal-flow structuring, but there are familiar tools that should be applied.

 

 

-Brandon Chicotsky (2013)

Crowdfunding Gears Up

(2012)

As an inductee of Upstart.com’s beta, I’m exercising a new form of investment deal-flow – crowdfunding. In April of 2012, the JOBS Act (the acronym stands for “Jumpstart Our Business Startups”) was signed into law by President Obama. The push for this legislation was straightforward – to allow “Internet funding portals” to sell equity stakes in small and emerging growth companies directly to the public without formal SEC registration.
 
The JOBS Act increases the number of non-accredited investors a non-public company may have to 500 before it’s required to register its common stock. The legislation also provides an exemption for SEC registration for certain types of small offerings subject to several conditions.
 
Such conditions include a yearly aggregate limit on how much each person may invest in offerings, based on the person’s net worth or income. The JOBS Act relieves “emerging growth companies” from full SEC disclosure requirements. The legislation also allows for “general solicitation” and advertising of specific kinds of private placement securities offerings. Ever seen a catchy YouTube video pitching startups to the investor community? Soon, these creative pitches will be fully legal.

This is welcome news to entrepreneurs who need more access to working capital and more amplification opportunities among investor networks, especially online. Furthermore, they can now begin leveraging the growing number of crowdfunding platforms such as Early Shares and Fundable. However, as of today, crowdfunding sites cannot yet offer equity stakes to investors through the provisions of the JOBS Act. The SEC and the Financial Industry Regulatory Authority (FINRA) have yet to issue the regulations required to implement the new law.
 
The SEC was given until January 2013 to issue the new rules, but it’s widely anticipated the agency will miss the deadline. FINRA is inching forward, deciding in early December 2012 to issue “an interim form” enabling investment platforms to behave as though the JOBS Act were implemented, as long as the amount of activity is limited and filed before deal-flow begins.

The delay in implementation seems centered on the potential for fraud. Given the recent history of financial disasters in the U.S., including the 2008 meltdown and the Great Recession, concern runs high that crowdfunding sites might quickly become a haven for abuse. Will crowdfunding become the twenty-first-century equivalent of a boiler room? Will every “get rich quick” scheme be promoted on crowdfunding sites by disreputable distributors? No federal regulator wants to be blamed for media reports of con men running amok.

Before negative hypotheticals prevent a new market wave, “we should take a deep breath and use some perspective,” says Forbes magazine. The new law describes protective measures. For example, the JOBS Act limits the amount of crowdfunding investments to a small portion of an individual's net worth. Once the law is implemented, there will be an SRO (self-regulatory organization) in place with regulations. Still, establishing an appropriate balance between the need for an unfettered crowdfunding marketplace and investor protection will prove challenging.

To date, crowdfunding in the U.S. has largely involved fundraising for creative projects (e.g., music, books and films) and non-profits. Sites such as Kickstarter match artists with patrons for specific purposes like launching a band’s music tour or funding a choreographer’s costumes. Patrons receive rewards, or “money for goods” (e.g., event tickets or signed memorabilia), in exchange for funding.

Sites like Upstart, Pave, MicroVentures and Wahooly are gearing up to meet an expected four-billion-dollar (or higher) American demand for equity crowdfunding in 2013. In the UK, sites like Abundance Generation, Seedrs, Bank to the Future and Crowd Cube are existing vehicles for equity crowdfunding, where it’s already legal.

Equity crowdfunding supporters include Google, Steve Case (founder of AOL), Mitch Kapor (founder of Lotus) and many other accredited investors and entrepreneurs. There is no doubt American seed round and growth-stage businesses are starved for working capital. Currently, available lending sources are not meeting demand.

Equity crowdfunding does not solely center on the latest tech fad. Traditional small businesses will benefit as well. According to the U.S. Small Business Administration, small businesses employ half of all private sector employees and pay 40% of the total U.S. private payroll. Small businesses also produced 65% of new jobs over the past 17 years. Equity crowdfunding could provide an attractive financing opportunity for a number of these companies whose financial needs often fall outside the guidelines of traditional lenders.
 
According to the St. Louis Business Journal, SEC and FINRA equity crowdfunding regulations may not be in place until the summer of 2013, or later.  Until then, potential internet funding portals will have to pivot their platforms and bide time until the windfall hits from the JOBS Act. Crowdfunding is closer than ever to becoming an American reality. In many communities, including Upstart.com, it’s already arrived.

 

-Brandon Chicotsky (2012)

Detailing The Law School Bubble

(2012)

 
Traditional wisdom about lucrative, stable career options includes such titles as “doctor” and “lawyer,” but not necessarily “startup engineer” or “executive vice-president.” Many of the nation's wealthiest individuals, in fact, are startup founders, finance moguls and corporate executives, but that doesn't seem to stop students from flocking to J.D. programs in search of long-term financial stability.

 

There is hard data to back this trend. Matt Leichter, an attorney who closely follows what has come to be known as the “Law School Bubble,” shares a graph (per The AmLaw Daily) indicating how applications to law schools tend to increase during periods of high unemployment and decrease as economic conditions improve.
 
It's a trend that may reflect applicants' hopes that a J.D. and admission to the state bar will provide access to the storied remunerative potential of an attorney's $112,760 median pay. And who can blame these students for riding out the present economic turbulence by investing in such a highly-respected, and seemingly valuable, professional credential?
 
It would seem the law of supply and demand does. Data from the Association for Legal Professionals (NALP) demonstrates that the average salary of legal career professionals for the class of 2011 is closer to $73,984, with better than half of the 18,630 reported salaries ranging from $40,000 to $65,000. And this figure is down from $77,333 in 2010 and $85,198 in 2009. This is a clear indication that starting salaries for law school graduates are on the decline. With over 1.2 million attorneys in the U.S.—or roughly one for every 260 citizens (see Leichter's 2010 research)—is it possible that the employment market for law school grads is over-saturated? Are the graduation and bar admission rates for new attorneys rapidly outpacing job availability?
 
We pointedly asked a recent Southern Methodist University law school graduate, who now owns a private law practice in the Dallas-Fort Worth area, whether there is a law school bubble. She responded, “When I went into law school, I would have said ‘no,’ but by the time I graduated, things had changed. During an economic downturn, not as many people pursue legal matters that aren't 'necessary' so the bigger firms that handle those types of cases don't hire as many attorneys. Government freezes and reduced funding for assistance programs also lead to fewer jobs.” The BLS job outlook for lawyers reaffirms this point by stating, “During recessions, demand declines for some discretionary legal services…Corporations are less likely to litigate cases when declining sales and profits restrict their budgets.”
 
The result is that many law school grads must look for work in occupations that make only marginal use of their expertise—or none at all. Erin Binns, director for career planning at Marquette University Law School, admits that successful grads must creatively deal with rejections and frustrations by casting their nets wide for employment. In the current issue of The Student Lawyer, she tells the story of three grads who landed their jobs by doing such things as applying for positions for which a J.D. was neither required nor preferred, accepting unpaid or part-time work for professional networking opportunities, and going above and beyond traditional expectations of their coursework and job requirements to distinguish themselves from the crowd.
 
Such things are hallmarks of those who succeed during an economic downturn in any industry, of course, but law students may face more pressure than most to find well-paying jobs after graduation. “Countless J.D.'s are paying their bills with jobs that have nothing to do with the law,” reports David Segal of the New York Times, “and they are losing ground on their debt every day…Studies find that most law students need to earn around $65,000 a year to get the upper hand on their debt.”
 
This, indeed, is the central problem of the “Law School Bubble.” For years, law schools—more than 200 of them—have been steadily flooding the U.S. marketplace with eager young lawyers who enter the workforce with ballooning tuition debt only to find their prospects aren't nearly so rosy as originally predicted. And others are beginning to take notice of this trend. Jordan Weissmann of The Atlantic believes the bubble is beginning to deflate, as recent numbers indicate law school enrollments are slowly declining from previous years—though he also expresses concern that, ironically, it seems those students with the strongest competitive potential among their peers are the ones opting not to enroll. Is there another educational course such bright minds might pursue instead?
 
Business Masters programs have traditionally been strong competitors for liberal arts graduates who might otherwise pursue a J.D., but there is a persisting rumor that, while business school might open the door for “truly astronomical earnings,” the legal profession is a better route to “comfortable, affluent stability” (as quoted from this article at the US News University Connection). There’s a perception that the overall need for lawyers will always outpace the need for top-level business executives, whose salaries are almost entirely dependent upon changing economic conditions and the whims of consumer demand. But does this perception hold true in an employment market where attorneys are minted faster than job openings?
 
There is also the issue of prestige. A law degree, it seems, is still a badge of achievement for many, one that is more tangibly linked to success in the legal profession than even a distinguished business degree is to success in the business world. After all, you do not need a degree to open your own business, but without a law degree you will never be able to open a private law practice. However, in an employment market where J.D.s are happily settling for jobs as paralegals and part-time research associates, does one with a business master's who accepts a job as an executive assistant really carry any less prestige? Upward mobility in both professions seems linked more to an applicant's social resources and creative goal-achievement strategies than to educational credentials. Moreover, one could argue that what the business degree lacks in status it more than makes up for in flexibility and long-term earning potential.
 
Perhaps this is why an increasing number of students are considering dual-track J.D./M.B.A. programs that immerse students in both disciplines. At least one such graduate recognizes, “School is the means, not the end…Don't go into school without a plan. Talk to lawyers. Talk to law students. Talk to business students. Have a very clear post-grad goal in mind, even if it changes.”
 
The lesson of the “Law School Bubble” is that too many liberal arts grads are piling into the legal career lifeboat without adequately counting the cost. Professional credentials may serve a social status goal, but when push comes to shove, the bottom line is what really counts—and the “bottom line” is something business school students know plenty about.
 
- Brandon Chicotsky and David L. Zion (2012)
Theoretical Science in Business

(2012)

 
Theoretical science explores what is possible, what explains observations and what often unifies diverse elements of thought. Where an applied scientist tests a hypothesis under controlled conditions (such as in a laboratory), evaluates the results and then determines whether they support or refute the hypothesis, a theoretical scientist creates new theories to be tested, proposes novel ways to look at data and combines disparate elements into greater wholes.
 
What in the world does theoretical science have to do with business?
​​
Two Americans, Alvin E. Roth and Lloyd S. Shapley, were awarded the Nobel Memorial Prize in Economic Science in October of this year for their work on market design and matching theory, which relate to how people and companies find and select one another in everything from marriage, to school choice, to jobs and even to organ donations.
 
Game theory is the study of strategic decision making. More formally, it is the study of mathematical models of conflict and cooperation between intelligent, rational decision-makers. The subject first addressed zero-sum games, but today game theory applies to a wide range of behavioral relations and has developed into an umbrella term for the science of logical decision making in humans and non-humans alike, such as computers.
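
A tiny worked example, assuming Python and a made-up payoff matrix, shows the zero-sum logic the field began with: the row player looks for the strategy that maximizes the worst case, while the column player minimizes the best case.

```python
# Row player's winnings (the column player loses what the row player wins).
payoff = [[3, -1],
          [0,  2]]

maximin = max(min(row) for row in payoff)  # row player's guaranteed floor
minimax = min(max(payoff[r][c] for r in range(2)) for c in range(2))

print("row player can guarantee:", maximin)        # 0
print("column player concedes at most:", minimax)  # 2
# maximin < minimax: no pure-strategy saddle point, so rational players
# randomize between strategies -- the kind of result game theory formalizes.
```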
 
Chaos theory is a scientific principle describing the unpredictability of systems. Its premise is that systems sometimes reside in chaos, generating energy but without any predictability or direction. These complex systems may be weather patterns, ecosystems, water flows, anatomical functions or organizations. While these systems' chaotic behavior may appear random at first, chaotic systems can be defined by a mathematical formula, and they are not without order or finite boundaries.
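
The logistic map is the classic illustration: a one-line deterministic formula whose output stays within finite boundaries yet becomes unpredictable, with two nearly identical starting points diverging after a handful of steps. A minimal sketch in Python:

```python
def logistic(x: float, r: float = 4.0) -> float:
    """One step of the logistic map, chaotic at r = 4."""
    return r * x * (1 - x)

a, b = 0.300000, 0.300001  # almost indistinguishable initial conditions
for _ in range(25):
    a, b = logistic(a), logistic(b)

print(f"after 25 steps: {a:.4f} vs {b:.4f}")  # typically far apart
```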
 
Theoretical science is applied to business in these and many other contexts. While social systems are not as amenable to the scientific method as physical phenomena, breakthroughs and real-world applications based on theoretical science are found in abundance in the business world.
 
That said, there are limitations. In the 1990s, many tried to construct “magic box”-style program trading software to profitably trade securities using applied chaos theory, but these efforts had very limited success. The ability of mathematics to accurately predict human behavior remains a highly controversial topic.
 
In the past, applied research and development was concentrated at corporate labs like Bell, IBM, Xerox PARC, etc. Today, innovation based on theoretical science is more likely to be found at small venture capital backed companies founded by creative risk takers. 
Consider the history of the biotech pioneer Genentech. The firm was founded in 1976 by Robert Swanson, a venture capitalist, and Herbert Boyer, a pioneering biochemist and co-inventor of a foundational technique for genetic engineering. The founding of Genentech is significant, not only because it launched the biotechnology industry, but also because it put basic science into the organizational envelope of a for-profit firm. Genentech’s first research project, supported by funds raised from venture capitalists, investigated whether a human protein could be made in a bacterial cell. At the time, this question was a central theoretical concern in the field of biology.
 
Genentech was, in its earliest days, a pure research organization. When it went public in 1980, it had no product revenues. It was still two years away from the launch of its first product (recombinant human insulin, developed and marketed by its corporate partner Eli Lilly) and five years from the launch of its first wholly owned product. Genentech demonstrated the feasibility of being a science-based business, and it created a template for literally thousands of entrepreneurial firms and bioscience firms founded over the subsequent thirty-five years.
Albert Einstein's famous equation E = mc², from his theory of relativity, is perhaps the most widely known example of theoretical science. Einstein arrived at what is considered to be the world's most famous equation because he thought that Newtonian mechanics alone was no longer sufficient to explain how the universe worked.
 
One could speculate: where would Einstein work if he were alive now? At a university or non-profit research institute, or would he be applying his genius at a startup firm that gave him the opportunity to pursue both basic research and create innovative products and ideas for profit? For the “Einsteins” of today, the private sector is increasingly becoming a more attractive option.

 

-Brandon Chicotsky (2012)
Incubator Engineering Will Soon Feature Cooperatives

(2011)

Beyond LTD’s, LLC’s, LLP’s, DBA’s, S-Corp’s, C-Corp’s, NGO’s (B-Corp’s coming soon for social entrepreneurship) and other traditional organizational assignments, a new form of startup structure is emerging – the co-op.
 
There are nearly 30,000 cooperatives in the U.S. with more than 100 million members. Co-ops offer opportunities for government partnerships, as many are organized as 501(c)(3) structures. This format is primed for tax relief. For hybrid equity structures, like College Houses Cooperatives in Austin, property taxes are waived due to the reinvestment structure of money flow. This creates affordable housing, satisfying many government provisions for grants. In this example, tenants are voted in by existing, paying members. The same vetting and reinvestment process can apply to business co-ops.
 
In the housing co-op model, once new tenants sign a contract, small equity is allocated to the tenant. This provides motivation for better stewardship of the physical assets in the community and motivates renters to become owners – either prompting capital improvement projects through direct investments or neighborhood involvement (to increase appreciation prospects).
 
When the appreciation index rises, in turn affecting the property value and offering capital gains for the organization, the house and organization have the opportunity to lower rent or leverage their assets for more property development. Cooperative living amenities may be comparable to high priced, plush venues. They also offer strong community with democratic involvement in major organizational decisions. Almost all co-ops involve a labor system, where all members (or equity holders) agree to share upkeep responsibility, management or collective investment risk.
 
The future of business cooperatives may spawn from incubators that under-serve their memberships. Incubator development has yet to expand enough to warrant “bubble” discussion, but the existing model may not serve members to its full potential.
 
What’s missing? A vested interest in collective success and more contingency plans for failure among member-businesses. Here’s how to provide what’s missing…
 
- Create an investment vehicle that enables capital deployment decisions to take place among a collection of businesses sharing equity in one another.
  - Each member-business would have a vote on decisions regarding allocations (in some cases enabling investors to weigh in as well).
- Similar to an incubator, careful vetting and pre-set criteria may be in place to attract and recruit certain member businesses (enabling market segmentation and focus).
- Each member business allocates a certain amount of equity to be owned and shared among the other member businesses.
- When one business fails, its talent and intellectual capital may be absorbed by other member businesses or voted out as excess.
- Collaboration and skill sharing are encouraged due to the collective interest in success.
- Incubator founders, who may also have equity allocations provided to them, can still perform their roles of offering capital, programming, mentorship and amplification.

 

This cooperative model helps protect founders’ investment, creates a more amicable coworking space (or “laboratory”) and better utilizes talent after failure. Incubators are increasingly used by revitalization projects in civic planning as vehicles to attract talent and spur job growth. Soon, we should see cooperatives become an investment vehicle and portfolio offering under the incubator banner.

 

-Brandon Chicotsky (2011)

Brief Evangelizing for P2P

(2011)

A Boost from the Crowdfunding Amendment of the JOBS Act

 

Get to know the concept of “peer-to-peer (P2P) lending.” It’s not too late to tap into the benefits of early bubble growth. LendingClub.com, launched in 2007, has passed the $500 million mark for investments, including small borrowers and lenders. More recently, the company surprised the market by bringing in hedge funds, family offices and other heavy investors. CEO Renaud Laplanche claims $85 million of active investment management.

 

Prosper.com, the closest rival, recently landed a $150 million hedge fund commitment to its platform and has crossed $300 million in loan volume, validating the P2P model. By the end of 2011, the U.S. Federal Reserve estimated U.S. consumer credit at $2.5 trillion. While P2P barely dents that market, these online community platforms for lending and credit offer more manageable and affordable interest rates and loan restructuring, cut out banks, and are gaining traction.
 
Also, the Securities and Exchange Commission receives filings for each investment, which has raised eyebrows in the insurance and reinsurance industries. New guarantees for repayment could be included in the P2P model soon. This bodes well for bank alternatives as consumer dissatisfaction grows.

 

Just last week, Occupy Wells Fargo, a protest movement spun off from Occupy Wall Street, shut down an investor meeting. While P2P is likely the next option for small loans and investors, high-profile credit card industry executives are joining P2P companies to guide their growth. Just don't tell the protesters, lest P2P lose its momentum.
 
Adding to the mega-trend statistics, last month SocialLending.net reported a first-ever combined monthly volume for LendingClub and Prosper of $50.8 million. Both companies are maintaining 100% annual growth rates, which absolutely merits evangelizing.
 
Below is an estimated breakdown, spanning roughly one business week, of how quickly a P2P loan can be obtained:
 
Mon – Apply for a loan on Prosper.com
Tue – Loan is active on the platform for investors
Wed – Loan is fully funded by investors
Thu – Prosper emails request for documents (driver’s license, copy of a voided check)
Fri – Phone call from Prosper verifying personal details
Fri – Loan issued
Mon – Money appears in the bank account
 
P2P offers web-savvy consumers a way to pay down credit cards, while investors earn risk-adjusted returns that beat many other investment products. Investors can choose small loan-participation amounts, and the notes are largely uncorrelated with the rest of a portfolio.
 
What are the returns? LendingClub reported average annual returns of 5.8 to 12.3 percent, depending on loan grade, while Prosper reported an ROI of 10.47 percent for its notes. As a reference, Lendstats.com independently tracks returns at both companies.
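For a back-of-the-envelope sense of how loan grades drive a blended return, here is a small Python sketch. The grade mix and per-grade rates are hypothetical, chosen only to fall inside the 5.8 to 12.3 percent range reported above; they are not actual LendingClub or Prosper figures.

# Hypothetical portfolio: half in grade A, the rest in riskier grades.
grades = {
    "A": {"weight": 0.50, "annual_return": 0.058},
    "C": {"weight": 0.30, "annual_return": 0.093},
    "E": {"weight": 0.20, "annual_return": 0.123},
}

# The blended return is the weighted average return across grades.
blended = sum(g["weight"] * g["annual_return"] for g in grades.values())
print(f"Blended annual return: {blended:.2%}")  # 8.15%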
 
So, will these investor communities ever get fully insured? Getting there will take strategic fundraising, political groundwork, and time. So far, LendingClub's reports show only $6 million from two insurers, one of which has applied for a National Association of Insurance Commissioners rating for its investment. This insurance scale should grow with more customer validation and pressure from government.
 
This is what we call an emerging market. There's activity and steady growth, and soon there will be surprises and setbacks. TechStars, a New York-based incubator, said it received more than 30 applications from crowdfunding startups for its summer 2012 class, and the JOBS Act will spread the investment pool to thousands more.
 
Last month's Pebble E-Paper Watch for the iPhone, which raised over $6 million on Kickstarter (a new record), proves the impact of crowdfunding. While this isn't the same model as P2P, it demonstrates collaborative consumption's customer-validation process, which Eric Ries, an Austin resident, alludes to in his book The Lean Startup.
 
Customer validation is the process by which businesses raise funds or secure purchases before ordering manufacturing or aggressively pursuing a series financing round. It goes beyond the expenses of office space and secretaries, which are rarely relevant for startups, and delves into methods for assessing a company's or product's viability.
 
Overall, P2P funding and crowdsourcing are rising. Rather than laboring too intensely over proof, engage the space yourself with a safe, low buy-in. After you see your returns, you may join me in participating with gusto.
 
- Brandon Chicotsky (2011)
Migrating Business to the Cloud:

Precautions and Infrastructure Needs

(2011)

2012 has been the year of transitioning to the cloud, even though the cloud is a fairly old Internet infrastructure model for businesses. Most executives understand that web space (or cloud space) is not a new invention, but it has new relevance. Looking back, software showcased in physical boxes on store shelves, ready for manual installation, was an impractical and wasteful delivery method; the cloud could have provided immediate purchase of and access to such software. The same principle applies to businesses seeking to streamline internal, data-heavy processes.

 

Retailers slowed the market growth of crowdsourcing, network sharing, and cloud software storage. The infrastructure burden is no longer shifted to the consumer; it's now mostly managed by the server host. This is a sign of progress, streamlining, and Internet efficiency. Security protections for server support systems have also matured, which gives data-heavy businesses more reason to migrate to the cloud.

 

Instead of purchasing additional processors as needs grow, you can now purchase more processing power on a host's hardware. Ever wondered why Jeff Bezos and the Amazon team began purchasing rural land in your region? They are contributors to the new market of web-space hosting; their servers, like those of many other providers, power the web. The burden of hardware has been minimized, and this can lower operating costs. With sound purchasing, your personnel can scale capacity for different data workloads up and down more easily and quickly, without ever acquiring hardware or letting hardware sit idle.

 

The recommendations and facts below are based on our exposure to IT departments transitioning or migrating to cloud-based software systems. Part of our research as graduate students focusing on technology management and entrepreneurship was to define the processes and systems involved in such migrations.

 

Two new technologies are converging, the result of which promises to make businesses that require access to data more productive and profitable: cloud computing and data mining.

 

Cloud computing is the delivery of computing as a service rather than as a product. Computing resources are maintained offsite in the same way utilities such as electricity, Internet service and telephone service are.

 

Data mining, as the name implies, is the ability to extract desired information from a set of data. Today it is possible to extract knowledge from large, complex data sets, and this is essentially what data mining entails.

 

Before exploring these two technologies and how businesses will benefit from them, it’s important to understand the current, typical state of affairs for data-dependent businesses.

 

First, infrastructure is key.

 

Businesses that offer software as a service, such as Salesforce, typically warehouse disparate types of data. Information such as name, company, title, and address is housed concurrently in a single database. Customers log in and either enter new data or purge and query existing data through a user interface provided in a web browser, over any Internet connection.

 

This requires hardware and software infrastructure that is expensive to maintain, difficult to keep online, and in need of safeguards against corruption. As each customer's database grows ever larger, such companies must continually expand the infrastructure with additional software licenses, servers, and the like.

 

Some companies also have data subscriptions, such as litigation and financial data, that customers can access through a search engine to perform tasks like e-discovery or business-intelligence research. In those cases the data is typically growing, which further compounds the need for a growing infrastructure and can complicate otherwise linear business decisions.

 

In other words, for some businesses, infrastructure growth is directly proportional to business growth; for those that rely on data subscriptions, the equation has many more variables. In both cases, though, data must first be managed and then mined, and this is best done through cloud integration (cloud computing).

 

The key benefit of cloud computing is something called elasticity: on-demand provisioning that lets cloud subscribers access additional computing resources as they are needed. A perennial conundrum for IT departments is forecasting future hardware and software needs, then planning for and procuring them on schedule. Elasticity removes much of that guesswork, as the sketch below illustrates.
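A minimal Python sketch of such an on-demand scaling rule follows; the utilization thresholds and server counts are hypothetical placeholders, not any provider's actual API.

def rescale(active_servers: int, cpu_utilization: float) -> int:
    """Return a new server count for the observed average CPU load."""
    if cpu_utilization > 0.80:              # running hot: provision more
        return active_servers + 1
    if cpu_utilization < 0.30 and active_servers > 1:
        return active_servers - 1           # running cold: release capacity
    return active_servers                   # steady state: change nothing

# A demand spike followed by a lull: capacity follows the load.
servers = 4
for load in (0.85, 0.90, 0.55, 0.20, 0.15):
    servers = rescale(servers, load)
    print(f"load={load:.0%} -> {servers} servers")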

 

At the beginning of the fiscal year, for example, HR and executive management typically require a forecast from hiring managers of how many employees they will hire and when. IT can then submit a budget and schedule for hardware and software purchases. Cloud computing can eliminate or diminish this requirement.

 

This capability becomes exceedingly valuable to data-reliant businesses. Engineering and IT personnel can reduce planning efforts and resources, and focus more on providing the right infrastructure for individual internal consumers. Migration to a cloud computing environment also yields an additional benefit for data-reliant companies: the ability to dynamically diminish computing resources when they are not needed.

 

Infrastructure requires a non-trivial capital and human investment: someone has to buy it, someone has to install it, and someone has to maintain it. The intrinsic flexibility of cloud computing translates to flexibility for both engineering and external customers. If data mining from a new data set is needed only temporarily, there is no need to buy additional infrastructure that will afterward sit wasted.

 

All of this assists in both business planning and execution, through simplification and reduced cost. This is important for understanding the cloud’s potential, but what about upgrading systems for data usage specifically?

 

Data mining requires a few considerations before migrating from a traditional IT environment to a computing cloud. First, understand that you can't simply flip a switch and dump your data directly into the cloud. Second, an orderly migration effort matters.

 

Cloud computing introduces security risks that don't exist when data sits behind your own firewall, and your migration plan must account for that reality. Numerous well-vetted companies can establish cloud security and monitor your systems; they should be researched and accounted for in the transition expenses.

 

Data behind your firewall is subject to two concerns: 1) privacy for each customer from all other customers, and 2) external threats. In a cloud, although you no longer retain responsibility for external threats, you retain the requirement of protecting customer privacy, and you introduce an additional risk.
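To illustrate the first concern, here is a minimal Python sketch of per-customer (tenant) isolation in a shared database; the schema, tenants, and data are hypothetical.

import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE contacts (tenant_id TEXT, name TEXT, company TEXT)")
db.executemany(
    "INSERT INTO contacts VALUES (?, ?, ?)",
    [("acme", "Ada", "Acme Corp"), ("globex", "Grace", "Globex Inc")],
)

def contacts_for(tenant_id):
    # Every query is scoped by tenant_id, so one customer can never
    # see another customer's rows.
    return db.execute(
        "SELECT name, company FROM contacts WHERE tenant_id = ?",
        (tenant_id,),
    ).fetchall()

print(contacts_for("acme"))  # [('Ada', 'Acme Corp')] and nothing else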

 

You must also prevent the data from being sniffed or otherwise compromised as it travels between your environment and the cloud. Encryption therefore becomes a critical concern, and it typically goes hand-in-hand with another one: cost optimization.

 

You want your data to have the smallest footprint possible, because a bigger footprint translates to greater computing resources, which increases cost. Data compression is therefore highly desirable. What's needed is a solid pairing of compression and encryption algorithms, applied in that order: compressing after encryption is ineffective, since well-encrypted data has little redundancy left to compress.
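Here is a minimal Python sketch of that compress-then-encrypt step, using the standard-library zlib module and the third-party cryptography package (pip install cryptography). The payload is a stand-in; a real pipeline would also manage keys properly and stream large files rather than hold them in memory.

import zlib
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, load this from a secure key store
cipher = Fernet(key)

payload = b'{"name": "Ada", "company": "Acme Corp"}' * 1000

compressed = zlib.compress(payload)      # shrink the footprint first...
ciphertext = cipher.encrypt(compressed)  # ...then protect it in transit
print(len(payload), "->", len(compressed), "bytes before encryption")

# The receiving end reverses the steps and recovers the original data.
assert zlib.decompress(cipher.decrypt(ciphertext)) == payload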

 

Data-reliant companies typically also have data hygiene requirements. It's a simple fact: data is dirty. It is often mis-formatted, incomplete, and inaccurate. The data cleansing and verification processes and algorithms already in place become even more important in a cloud than behind your firewall. Because of the extra transportation step from your environment to the cloud, you must know that you have clean data from the start. Pre-transfer and post-transfer testing is critical, and additional attention to monitoring is also necessary.
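One simple form of pre- and post-transfer testing is to fingerprint the data before it leaves your firewall and confirm the fingerprint after it lands in the cloud. A minimal Python sketch, with the upload itself left as a placeholder:

import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

records = b"name,company\nAda,Acme Corp\nGrace,Globex Inc\n"
checksum_before = fingerprint(records)

# ... upload to the cloud, then re-read the stored copy ...
stored_copy = records  # placeholder for the round-tripped data

if fingerprint(stored_copy) != checksum_before:
    raise RuntimeError("Data corrupted in transit; halt the migration step")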

 

Cloud migration could be adopted almost universally by businesses that rely on data. Businesses that don't migrate will lose to competitors that do, because maintaining infrastructure behind the firewall costs far more than operating in a cloud environment. That doesn't mean cloud migration should be attempted rashly, however; it requires thoughtful planning involving both engineering and IT.

 

For example, you should plan to migrate mission-critical operations last to reduce the probability of business disruption. Existing infrastructure should be maintained throughout the migration so the move can be rolled back if necessary, and careful documentation and testing during the early phases helps ensure mission-critical operations are not disrupted during the final phase.
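As a toy illustration of that phasing rule, the Python sketch below orders hypothetical workloads so the least critical migrate first and the mission-critical ones go last; the names and criticality scores are invented.

# Each workload carries an invented criticality score (higher = more critical).
workloads = [
    ("billing system", 10),   # mission critical: migrate last
    ("internal wiki", 2),
    ("dev/test servers", 1),
    ("reporting", 5),
]

# Ascending criticality puts the safest migrations in the earliest phases.
ordered = sorted(workloads, key=lambda w: w[1])
for phase, (name, criticality) in enumerate(ordered, start=1):
    print(f"Phase {phase}: migrate {name} (criticality {criticality})")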

 

- Brandon Chicotsky & David L. Zion (2011)
A (Not So New) Cloud

(2011)

One of Steve Jobs' last heroics was the announcement and launch of Apple's iCloud, which allows MP3s and other Apple application data to be accessed over the Internet without taking up space on an individual computer's hardware. Software engineers and user-interface designers have been privy to cloud-based computing for years, but now it's the broader market's turn to enter the world of "the cloud."
 
The cloud is software that stores your data (emails, pictures, and documents) and runs in your web browser over the Internet. Its formal name is "Software as a Service" (SaaS), and the delivery model is more commonly known as cloud-based computing. So, why is the cloud so significant?
 
For freelance and entrepreneurial software engineers, the boom in cloud services has been a game changer, and soon the average Internet user will feel it too, if they haven't already. Several years ago, every program you used had to be installed and upgraded locally, which took up space on your hard drive and couldn't be accessed from other computers without re-installing.
 
Now, almost all software is being fashioned for access from any computer, anywhere in the world. A common attribute of this cloud boom is that much of the software is accessible without cost. For software entrepreneurs and businesses, this creates massive opportunity as "open source" applications fuel a collaborative market trend (Wikipedia is, in part, a result of this open-source free access).
 
Aside from the emergence of collaborative-consumption business models (examples: Hadoop, Grooveshark, Salesforce, and SurveyMonkey), data, information, and software are becoming more accessible to coders, Internet enthusiasts, and web designers. Some cloud services are sophisticated enough to offer profile logins, whereby users can access organized, specific software to apply to their projects.
 
Cloud-based computing is an important trend to monitor for understanding where Internet consumer products are headed and where investment deal flow is directed. As we witnessed with one of Steve Jobs' last performances, the market is moving toward a (not so new) dynamic in the Internet space: the cloud.
 
- Brandon Chicotsky (2011)
Conversations with T. Boone Pickens

(2010)

After recently attending the 2011 Clean Energy Venture Summit, I've become inspired by our alma mater's role in green technology innovation and the surrounding entrepreneurial economy. A week prior to this summit at The University of Texas at Austin's AT&T Conference Center, the Texas Tribune hosted a policy festival at the same location with notable speakers such as T. Boone Pickens. Like many alumni, I'm proud to learn UT is becoming a center point for energy policy discussions and innovative strategies to develop efficient energy use for our nation's future.

 

At the Clean Energy Summit, we learned our university has developed 3,000 strains of algae for companies to pilot-test for energy-extraction potential. The Defense Advanced Research Projects Agency (DARPA) is also sponsoring a project, backed by $25 million in funding, to develop jet fuel for military use, known as JP-8, from biological sources.

 
As university biologists and engineers engage in this research and development, it's inevitable that "gap fuel" discussions will ensue. Gap fuels are what T. Boone Pickens described at the Texas Tribune Festival as the fuels needed to bridge the gap from archaic, heavily polluting energy sources (oil and coal) to fully non-polluting ones. Pickens, an Oklahoman welcomed onto UT's campus, described natural gas as a gap fuel, touting three benefits: "It's cheaper than current energy sources, more abundant and domestic."
 
Hydraulic fracturing (fracking), a water-intensive drilling process that fractures rock to release natural gas, was a subject of debate at the Texas Tribune Festival. Pickens, along with U.S. Senator John Cornyn, disputed data reports and environmental activists' claims that fracking causes minor earthquakes and damages water aquifers. While the Texas Tribune Festival admirably debated the issues and policies around fracking, the Clean Energy Summit showcased companies whose missions aim to make fracking safer, more cost-effective, and more resource-efficient.
 
It's fitting that our university is a partner in coalescing thought leaders in public policy, the energy industry, and innovative technology. These are areas of intellectualism, analysis, and entrepreneurship that shape our future, alter our markets, and affect our livelihoods. Every McCombs alumnus and fellow Longhorn can stand proud knowing our campus attracts gatherings that change the world, but we should never stand idle. Let's give, participate, and offer our resources to ensure UT remains a leader in these critical energy and technology advances.
 
- Brandon Chicotsky (2010)
The Vacuum of Downtime for an Information Baby

(2010)

My generation may not know whom to thank for capitalizing on advances in computer micro-miniaturization. Such advances have spawned broadly accessible, affordable personal computer products and Internet communication technology, and the commercialization and marketing campaigns built on this technology have produced stable interfaces for sharing information and developing captivating applications in what we commonly refer to as 'social media.'
 
Whether it’s the Defense Advanced Research Projects Agency (DARPA), British scientist Tim Berners-Lee (“world wide web” creator), Bill Gates, Michael Dell, Steve Jobs, Tom Anderson, Mark Zuckerberg, or Larry Page and Sergey Brin, my generation owes significant credit to the aforementioned innovators and a host of others for creating a new vacuum of downtime.
 
Admittedly, my entertainment and whimsical creation are often channeled toward interactive technology, rapid global communication, or information sharing. This raises the question: what does an 'information baby' do with spare time? I'll tell you exactly what we do…we play and inundate our minds.
 
Put yourself in the mindset of a Baby Boomer. For those who belong to this demographic, I invite sentimental reminiscing. Large-scale commercial products like erector sets, Tinkertoys, color television sets, flying miniature helicopters, high-powered telescopes, easy-to-engineer bottle rockets, remote-control toy cars, battery-powered lights, citizens' band (CB) radio, stand-up speakers, four-track and eight-track players, and the Texas Instruments handheld calculator (remember the $100 and $200 pricing?) made up, along with other simple rotary machine toys and sound-projecting devices, the bulk of technology play toys for Baby Boomers.
 
The downtime of young students in the Baby Boomer era had less connectivity with citizens of other nation-states. There were no visual labyrinths of online linking, coding, and embedding; few opportunities to move 2D, 3D, or 4D images in real time through virtual simulation; and few opportunities to map out graphic and informational architecture on a well-pixelated, well-illuminated screen.
 
Information was spread orally, through physical publications, through code transfers over electronic wiring, through rough video and audio tracking, or through the physical transfer of typed or handwritten pages. Today, all modes of communication from the Baby Boomer era remain in practice, alongside a multitude of expedited processes that captivate, commercialize, digitize, and connect the world at rapid speed.
 
Today, my colleagues in graduate school likely read over 3,000 words of content in a business journal or online news publication before attending lecture each day. It's estimated that the front page of the New York Times holds roughly 2,200 words, all of which can be read on a smartphone with a few scrolls of the thumb. We multitaskers may also enjoy a television news program (they now stream online), which delivers 150-200 words per minute, while reading our online publication and fielding SMS alerts from the Wall Street Journal. This is a simple example of information inundation for news enthusiasts and habitual news readers (I'm one of them).
 
On average, I have about eight browser windows open at once. Most involve news, social media, search engine entries, music applications, or online radio or podcast broadcasting. The audio feeds outside of music involve information output or idea sharing from industry savants or respected authors. Imagine what this does to a mind in repeated daily sequence over several years of formative growth. Almost all of my colleagues in graduate school have developed these habits.
 
While writing this blog entry, I have a radio segment playing in the background, which I helped develop with other information babies. As an example of how we spent our downtime, and to draw a distinct contrast to the recreation involving technology from generations prior, I’ll explain…
 
While employed with menial tasks (entering data in Excel and drafting formulas to determine industry trends and company revenue projections), I found extra time to correspond online with other uninspired young professionals to build an online radio show. We used free software downloaded from "open source" sites and embedded template applications (also obtained through open source) into a larger template website design. With a little bit of Photoshop (a program for graphic imaging and photo editing, whose name we use as a verb), we filled in the template design with our customized imaging. With the open-source software, open-source applications, open-source website design, and 'photoshopped' imaging, we had a presentable site with functionality.
 
The site's goal was to offer a 24/7 radio show and feature humorous video content specific to our industry of interest, either produced by us, produced by hired interns, or obtained through other online social media (like YouTube). We embedded these videos and pictures on our site. We hired interns from our local universities (near our respective workplaces) to broadcast around the clock: covering news, conducting interviews, and offering entertaining segments. The radio show would broadcast from the site and be accessible worldwide with a simple Internet connection. Finally, we embedded a donation application, or "button," on the site so listeners could express their support for the show.
 
This entire operation has proven successful (relative to our goals); the show remains on air and has been running for months. Our expenses are completely offset by listener contributions, and most notably, we're all extraordinarily entertained during any waking hour.
 
Perhaps we were venting the lack of entrepreneurial freedom in our jobs, or maybe we sensed our generational advantage in developing online endeavors. One confidant is a full-time, profitable online Texas Hold 'Em player; the other is an up-and-coming hardware engineer with a knack for affiliate marketing schemes and basic HTML coding. Bottom line: we represent a breed of plugged-in, online-intuitive thinkers who use popular software and Internet tools at our disposal to inundate our minds, communicate abroad, and build enterprises via the Internet.
 
Sometimes, these endeavors become markedly profitable for our friends. Other times, as in the case I described with our radio show, only our small network and industry fans enjoy the fruits of our labor. Social media is mostly where we recruit listeners, and those who tune in are loyal listeners and contributors.
 
Through various chat mediums (Skype messaging, Facebook chat, Gmail chat, "tweets" on Twitter, online forum threads, and Microsoft instant messaging), we were able to brainstorm ideas and delegate tasks. Meanwhile, each of us maintained our various projects, managed our jobs' deadlines, browsed news, engaged social media, listened to audio feeds, and learned new coding methods through search engine entries.
 
I must qualify this behavior by saying that each confidant involved has developed skill sets that function outside the online arena, as have most information babies. My confidants and I are capable public speakers, have respectable interpersonal communication and organizational skills, and are enthusiastic dressers of business professional or occasional golf casual.
 
We learned these skills in the last feudal system in the United States, the college campus, where tenured professorships provide timeless insight. I promise, the ageless principles touted by Dale Carnegie and Napoleon Hill have not fallen on deaf ears. I reference these skills because they belong to the majority of college-educated members of my generation, especially those who attended The University of Texas.
 
Information babies generally understand the importance of maintaining relationships, building tangible plans, visualizing success, and creating lasting influence as leaders in varied industry positions. All the while, information babies often excel on other fronts: we're superb multitaskers, we retain and regurgitate information at unprecedented levels, and we're exposed to hundreds of thousands of impressions (images on the computer screen) in an average week, whether advertising, news articles, flash media, website architecture, data, interactive maps, search engine results, interface application systems, or communication technology.
 
What's most notable is that information babies are now on the market, ready to join your companies, start their own entrepreneurial endeavors, and educate the next generation of youths. In our early years, most of our parents taught us to use our time wisely, develop high emotional intelligence, and seek mentors. However, it's likely none of our parents told us to spend our downtime engaging a community of friends through social media tools or inundating our minds with information through search engines and varied online news-feed portals. The reason these mandates were not pressed upon us is that the vacuum of downtime described above never existed until now.
 
- Brandon Chicotsky (2010)
Cowtown’s Tax Collection

(2009)

My hometown, Fort Worth, accounts for tax revenues through a City Operating Budget. The accounts are grouped into what are known as Enterprise Funds, General Funds, Internal Service Funds, and Special Funds. All city services (sanitation works, airports, water works) are supported by Enterprise Funds.
 
Generally, user charges pay for operations. In recent years, Fort Worth has pursued a noble goal: eliminating all forms of subsidization of utility enterprise funds. This means no one is using my tax dollars to pay for mismanagement or budget shortfalls, a form of conservative governance I highly value.
 
Every tax dollar paid to the municipality initially goes to the General Fund before money is allocated outward. The Internal Service Funds finance each department's services and goods (equipment, technology solutions, office services, and temporary labor). The Special Funds track dedicated revenue and expenses, helping identify excesses and lapses while safeguarding against budget oversights.
 
The city also accounts for grants. For example, in 2006 the U.S. Department of Housing and Urban Development granted the City of Fort Worth over $11 million for community development, emergency shelters, and housing opportunities for persons with AIDS. These grants are handled through a separate budget, which means Fort Worth's leadership must never become complacent in managing its resources.
 
Near my family's retail center on 7th Street (Chicotsky's Center), we have enjoyed new, highly valued commercial and condominium development taking shape throughout the Historic District and downtown area. New development helps keep the tax rate low for property owners, and as long as housing developers weigh affordability and market forces to avoid over-saturation, we can keep it that way.
 
Fort Worth's property tax rate has changed several times over the last few years, with welcome tax cuts, and we have proven somewhat immune to the aftermath of the 2008 market dive. We currently enjoy the city's lowest tax rate since 1986. As long as we see continued construction, more single-family homes, and smart assessments by the Tarrant Appraisal District (which provides current value assessments), we can keep our tax rate competitive.
 
- Brandon Chicotsky (2009)