Wednesday, January 14, 2015
According to numerous media reports, Facebook has just (soft) launched its new business-oriented "Facebook at Work" application. Given Facebook's ubiquity in the consumer social network space, the Facebook@Work announcement was met with great fanfare, with most observers suggesting that the new offering is aimed at competing with products like Yammer, HipChat, Slack, Salesforce Chatter and others of that ilk. This raises a number of interesting questions, at least some of which we can't fully answer yet, as Facebook@Work is still in a "closed" beta at the moment. But we can take a look at some of the issues around this new product, and its implications for the business oriented social network space.
A few important questions stand out to us, including the topics already raised by some media pundits, like "can you trust Facebook with your data?" and "how will Facebook monetize this?" These are valid concerns, and I'm sure they will be addressed in time. But right now, I'd like to start by asking perhaps the most pertinent question of all:
"Is Facebook at Work really an Enterprise Social Network?"
At first blush this may seem like a ridiculous question - you may think "Well, Facebook is a social network, and if it's aimed at business then of course it's an Enterprise Social Network." But this is a superficial and possibly inaccurate analysis. I think we can safely say that Facebook@Work is definitively a business social network, but whether or not it's an enterprise social network is a different question altogether.
To explore this in more detail, let's talk about what constitutes an "enterprise," or what qualifies as enterprise software. You could argue that any business is an "enterprise" by some definitions, but in the technology industry, "enterprise" has more specific connotations. In industry vernacular, "enterprise" generally refers to companies that are large or complex enough to place very specific demands on their software systems, and "enterprise software" is software designed and built specifically to serve the needs of those firms.
Enterprise software typically has to meet specific requirements in terms of reliability, interoperability, extensibility, and the other "ilities" as people often call them. When a complex firm embeds business logic, which constitutes some part of their competitive advantage, into a software system, the system has to be tailored to the exact specifications of that customer, or it provides no real advantage at all. Likewise, software that does not meet minimum thresholds for uptime, response time, ability to integrate with other enterprise systems, etc., is often not suited for enterprise use. Examples of required integration points may include Customer Relationship Management (CRM) systems, Sales Force Automation (SFA) systems, Enterprise Resource Planning (ERP) platforms, Business Process Management (BPM) products, etc.
Given that, what can we say about Facebook@Work? Well, details are somewhat lacking at the moment, but what has been reported to date suggests that Facebook@Work lacks any integration capabilities, including even the standard Facebook developer API. And there is certainly nothing to suggest any purpose-built integration points for connecting to CRM, SFA, ERP, or BPM systems. It also appears unlikely that Facebook@Work will support any customization to speak of. It seems that this will be strictly a hosted offering, run by Facebook themselves, and not a product where customers will have access to the source code and the ability to run a modified version. In terms of reliability, however, Facebook@Work should do well, assuming it inherits the same infrastructure and operational procedures that back the consumer Facebook. By any measure, Facebook is remarkably reliable, given the massive amount of traffic the site serves.
Based on what we know so far, I'd hesitate to call Facebook@Work an "enterprise" product. I think it will serve well as a replacement for HipChat, Slack, Yammer and similar tools in the SMB space. Companies of up to 250 employees or thereabouts, with limited needs to customize or integrate their software, will possibly find Facebook@Work very useful.
On the other hand, we believe that firms much larger than about 250 employees - and certainly those with more than 500 employees - will have needs that will not be served by Facebook@Work. This is, of course, based only on the information available today.
For firms that need a social platform that was purpose-built for enterprise scenarios - with API support for ActivityStrea.ms, FOAF, BPM integration, and business events (SOA/ESB integration) - and that is completely customizable, we suggest taking a look at our Open Source Enterprise Social Network offering, Quoddy. In addition to strong API support and a business-friendly Apache License, Quoddy can serve as the cornerstone of an Enterprise Knowledge Network, with support for Semantic Web technologies and pre-built integration with Apache Stanbol for semantic concept extraction and content enhancement.
At the end of the day, Facebook@Work is an exciting development, and we believe it will serve the needs of many - but not all - business customers. Luckily there are a large number of choices in the enterprise software space, with solutions available to fit all types of firms. Whether a firm adopts Facebook@Work, Yammer, Quoddy, Slack, or "other" we firmly believe that Enterprise Social Software is going to serve as an important channel for business collaboration and knowledge transfer for the foreseeable future.
Saturday, September 13, 2014
So, you've been following our blog, reading our tweets, friend'ing us on Facebook, reading our posts on Hacker News, and you've circled us on Google+, LinkedIn with us, and probably even driven by our homes a few times. Wouldn't you like to stop stalking from afar and actually, you know, come meet us??
Guess what? Now's your chance! We will be demo'ing in the "demo room" at this year's CED Tech Venture Conference in Raleigh, North Carolina, Tuesday and Wednesday of next week (that's Sept 16 and 17, 2014). And even better, you'll have not one, but two great chances to meet us in October! Phil will be presenting at the 2014 All Things Open conference in Raleigh, which happens October 22-23, 2014. Phil will also be speaking at the Triangle Java User's Group meeting in October, on the evening of October 20th.
Of course all three of these events would be amazing even if we weren't there, and we encourage you to attend all three, even if you have never heard of Fogbeam Labs. But if you do visit, be sure to find us and say hi!
Wednesday, August 27, 2014
I (Phil) have recently been asked to speak on a panel discussing Open Source Software and issues regarding intellectual property, OSS licensing, patents, and how recent changes have affected the Open Source world, etc. This makes sense, given that everything we do at Fogbeam Labs is Open Source, and we make participating in the OSS community part of our mission and core values. But I'm no legal expert, there's plenty I don't know about the legal issues in this sphere, and there are licenses that I don't know much about (especially the lesser-used ones). So I decided to do some "boning up" on the topic in advance, and remembered that there are quite a few resources dedicated to this topic which are themselves "open source" (or at least freely available).
So, I thought I'd throw together a list quickly, which may be useful to anyone who wants to get an overview of what this "Open Source" thing is all about, or who wants to deepen their understanding of OSS licenses and related topics.
First, we have the absolute classic The Cathedral and the Bazaar by Eric S. Raymond. This book deals with the fundamental dichotomy between how software is produced in the decentralized, distributed "Open Source" model, and how it is produced in a rigid, top-down, bureaucratic organization (like most software companies). Note that the linked page includes the full text of the book (including foreign-language translations), comments by the author, and links to other discussions and comments by other observers.
Fundamentally, if you want to understand the Open Source world and the mindset of the people who populate it, this is required reading. No, not everybody agrees with everything esr has to say, and yes, this book is somewhat dated now. But it has been so amazingly influential that it's become part of the very fabric of this movement.
Next up we have Understanding Open Source and Free Software Licensing by Andrew M. St. Laurent. This book focuses specifically on OSS and Free Software licenses, and includes a comprehensive analysis / explanation of all of the important and widely used licenses that you will encounter. If you have ever wondered "what do they mean when they say that the GPL is 'viral'" or "what's the problem with mixing code that's released under different licenses" or something similar, this is your book. It's not a law textbook, but it covers the legalities and legal implications of OSS licensing for laymen quite well.
Another excellent title covering the legal nuts and bolts of Open Source licensing is Open Source Licensing: Software Freedom and Intellectual Property Law by Lawrence Rosen. Rosen has been a high profile participant in legal aspects of Open Source for years, and has written a great book to help people understand the interaction of law and software. This book and the aforementioned Understanding Open Source and Free Software Licensing collectively cover pretty much everything you could want to know about licensing and legal issues (to the extent that such a thing is possible; there is still a lack of case law and legal clarity in certain areas).
Another excellent book, especially for those leading - or who would lead - Open Source projects is Karl Fogel's Producing Open Source Software: How to Run a Successful Free Software Project or "Producing OSS" as it's known. "Producing OSS" covers the nuts and bolts of running an Open Source project and actually shipping software. Surprisingly (or perhaps not so surprisingly) there is a lot more to running a successful project than dealing with code and tech issues. Karl's book deals with the various "soft" issues that projects face - dealing with volunteers, creating a meritocracy, understanding how money affects the project, etc. I highly recommend this book to anyone who is, or wants to be, an active participant in any Open Source community.
And last, but certainly not least, we have the Architecture of Open Source Applications series. In these two books, the creators of dozens of popular Open Source projects explain the inner workings of their projects, and reveal the architectural details that made them successful. If you value learning via emulation, this is an amazing series of case studies to learn from.
And there you have it folks - a virtual cornucopia of Open Source wisdom collected over the years. If you have ever wanted to develop a solid understanding of how Open Source works and what it's all about, this is a great place to kick off your journey. And, of course, feel free to post any questions or comments here.
Thursday, February 13, 2014
Earlier today I read an interesting article at TechCrunch by Peter Levine, in which he asserts that "there will never be another Red Hat" and more or less lambasts the notion of a company based on Open Source.
We are a company based on Open Source.
So, I guess my first thought should have been "Oh, shit. We're doing this all wrong. Let's yank all of our repositories off of GitHub and close everything immediately."
The truth is, Peter makes an interesting point or two in his article, and some of what he says at the end is moderately insightful. In fact, it reflects some decisions we made a few months ago about how we're going to position some new product offerings in 2014. But nothing in his article really provides any support for the idea that there is one, and only one, successful "Open Source Company".
OK, to be fair, I'll take his word that Red Hat is the only public company whose primary foundation is Open Source. But I'll counter that by pointing out that "going public" is not the sole measure of success for a firm. I'll also grant you that even Red Hat, seemingly the most successful "Open Source company" to date, is much smaller than Microsoft, Oracle, and Amazon.com.
Guess what? Almost every company is much smaller than Microsoft, Oracle, and Amazon.com. Comparing a company to those outliers hardly damns it. Truth is, RH is an $11 billion company - nothing to sneeze at. And yes, we have been known, on occasion, to use the phrase "the next Red Hat" when trying to describe to people what we're out to do here at Fogbeam.
Let's look at something else while we're at it... Red Hat is hardly the only successful Open Source company in the world anyway. It is probably the biggest and the most well known, but stop and consider a few other names you may have heard: Alfresco, Jaspersoft, Bonitasoft, SugarCRM, Cloudera, Hortonworks, Pivotal, Pentaho... Yeah, you get the drift.
And then there's this jewel of a quote from the article:
If you’re lucky and have a super-successful open source project, maybe a large company will pay you a few bucks for one-time support, or ask you to build a “shim” or a “foo” or a “bar.”

Unless I'm misinterpreting Peter here, he seems to be suggesting that companies do not want to, or are not willing to, pay for support for the Open Source solutions they use. All I can say is that this does not match my experience at all. Oh, don't get me wrong... there will always be some percentage of "freeloaders" who use the OSS code and never buy a support subscription. Red Hat know that, and we know that. But what we also know is that most businesses that are using a product for a mission critical purpose want a vendor behind the product, and they are willing to pay for that (as long as the value is there). The fact is, companies want to know that if a system breaks, there is somebody to call who will provide support with an SLA. They want to know that if they need training, there is somebody to call to provide that training. They want to know that if professional services are needed for integrations or customizations, there is somebody they can call who knows their shit. And, more prosaically, they want to know that there is a company there to sue if the shit really hits the fan.
So when I read Peter's article, I really don't hear a strong argument that there can't be other successful Open Source companies. In fact, I can't help but think that all he's really saying is "It's hard to build an Open Source company that will generate returns at a scale, and in a timeframe, that's compatible with the goals of Andreessen Horowitz." And that's a perfectly fine thing to say. Maybe an Open Source company would be a bad investment for A16Z. But that isn't even close to the same thing as suggesting that you can't be successful using Open Source - if your goals and success criteria are different.
Anyway, as far as the whole "next Red Hat" thing goes - the thing is, we don't actually aspire to be "the next Red Hat". We've just used that term because it's a simplification and it's illustrative. But as far as aspirations for where we are going? Nah... In fact, here's the thing: we aren't out to be "the next Microsoft" either. Or "the next IBM". Or "the next Oracle". Or "the next Amazon.com". And so on and so on, ad infinitum.
No, fuck all that. Our aspirations are far bigger than that. Wait, did I say "bigger"? Maybe I really just meant "different". Bigger isn't always better, and there are other ways to distinguish yourself besides size. Will we be an $11 billion company one day? I don't know. Maybe we'll actually be a $221 billion company. Maybe we'll be a $2 million company. Maybe we'll never make a dime at all.
What I do know is that our plan is this: We are working to build a company that is so fucking awesome that in a few years, people doing startups will go to people and say "We plan to be the next Fogbeam Labs"...
Thursday, February 6, 2014
Over at the BPM.com forums, Peter Schooff has posed a very interesting question: "What Is the Key to Solving the Social Aspect of BPM?" This is a topic we've thought a lot about, and "social BPM" is very core to us here at Fogbeam Labs, so I wanted to take a moment and share some thoughts on this very important topic.
The discussion centers on this finding from a recent Aberdeen survey:
Thirty-four percent (34%) of respondents in Aberdeen’s Solving Collaboration Challenges with Social ERP indicated that they have difficulty converting collaborative data into business execution. This is unnerving because, for many processes, the ability for people working together collaboratively is essential for process effectiveness.
To really understand this, you have to consider what exactly the collaborative aspects of a BPM process are. And, in truth, many processes (perhaps most) are inherently collaborative, even if the collaborative aspect is not explicitly encoded into a BPMN2 diagram. Think of any time you've been involved in a process of some sort (whether BPM software or workflow engines were involved or not) and you have to make a decision or take some action... and you needed information or input from someone else first. If you picked up a telephone and made a call, or sent an email or an IM, then you are doing "social BPM" whether you use the term or not.
The first factors, then, in really taking advantage of collaboration in BPM are the exact same things involved in fostering collaboration in any fashion. It's not really a technology issue; it's an issue of culture, organization design, and incentives. Do people in your organization fundamentally trust each other? Is information shared widely or hoarded? Does the DNA of your firm encourage intra-firm competition between staff members, or widespread collaboration that puts the good of the firm first? Sadly, in too many firms the culture is simply not collaborative, and nothing you do in terms of BPM process design, or deployment of "enterprise social software" or BPM technology, is going to fix a broken culture.
Next, we have to look at these questions: Does your firm actually empower individual employees to make decisions and use their judgment? Can an employee deviate from the process? No? Well what if the process is broken? Can your staff "route around" badly designed process steps, involve other people as necessary, inject new information, reroute tasks and otherwise take initiative? If the answers to most or all of these questions are "no" then you aren't going to have collaborative processes. If your organization is a rigid, top-down hierarchy that embraces a strict "command and control" philosophy, you're never going to get optimal effect from encouraging people to collaborate on BPM processes - or anything else.
It's only once you have the cultural and structural issues taken care of that technology even comes into play. Can some BPM software do more than others to encourage and facilitate social collaboration? Absolutely. That's why we are developing our Social BPM offering with specific capabilities that help cultivate knowledge sharing and collaboration. Using semantic web technology to tie context to tasks and content (where "context" includes things like "Bob in France is the expert on this topic and here's his contact info"), and exploiting "weak ties" and Social Network Analysis to provide suggested sources for consultation, are crucial technical capabilities for making BPM more "social". Additionally, if you have the cultural and structural alignment in place to really foster collaboration and knowledge sharing, then enterprise social software is an amazingly powerful tool for cultivating knowledge transfer, fostering engagement, and driving alignment throughout your organization.
Done well, combining social software and BPM can provide tremendous benefits. But no technology is going to help if your culture is wrong. If you're having trouble with collaboration, I strongly encourage you to examine the "soft" issues before you spend a dime on additional technological tooling.
Wednesday, November 13, 2013
There is an interesting article at Gigaom right now, by Dominiek ter Heide of Bottlenose, in which the author asserts that the Semantic Web has failed, and purports to give the three reasons why it has failed.
This is, of course, utter bollocks. I want to take this opportunity to explain why, and to provide a counterpoint to Dominiek's piece.
For starters, there is simply no legitimate basis for saying that "the Semantic Web has failed" to begin with. Given that his initial assertion is flat out wrong, there's almost no reason for a point-by-point rebuttal to the rest of his piece, but we'll work our way through it anyway, as the process may be educational.
So, if I'm going to say that the Semantic Web has not failed, then how might I substantiate or justify that claim? OK, easy enough... you probably use the Semantic Web every. single. day. And so do most of your friends. You just don't know it. And that is kind of the point. The Semantic Web isn't something that's really meant for end users to interact with directly. The essence of the Semantic Web is to enable machine readable data with explicitly defined semantics. Doing that allows the machines to do a better job of helping the humans do whatever it is they are trying to do. A typical user could easily use an application backed by the Semantic Web without ever knowing about it.
And here's the thing - they do. I said before that you probably use the Semantic Web every day. You might have thought "Yeah, right, Phil, no way do I use anything like that". Well, if you use Google, Yahoo, or Bing, then guess what - you're using the Semantic Web. Have you seen those Google Rich Snippets around things like results for restaurants? Those are powered by the Semantic Web. Aside: for the sake of this article, I treat RDFa, Microdata, Microformats, RDF/XML, JSON-LD, etc., as functionally equivalent, as the distinction is not relevant to the overall point I'm making.
I could stop here and say that we've already proven that Dominiek ter Heide is wrong, but let's dig a little deeper.
The first reason that Dominiek gives reduces to an argument that everything on the Semantic Web is "obsolete knowledge" or Obsoledge.
This has the effect of making the shelf-life of knowledge shorter and shorter. Alvin Toffler has – in his seminal book Revolutionary Wealth – coined the term Obsoledge to refer to this increase of obsolete knowledge.

This is simply a factually incorrect view of the Semantic Web. Again, the goal of the Semantic Web is to provide machine-readable, defined semantics along with data on the web. It does not matter one bit if that data is as old as a reference to Leonardo da Vinci or as recent as a reference to last night's episode of Grimm. The Semantic Web is just as relevant to the kind of up-to-date, trending data that Dominiek seems so obsessed with as it is to "historical" data. And let me also point out that history remains amazingly important - as the old saw goes, "Those who fail to learn from the past are doomed to repeat it". To suggest that knowledge loses all value simply because it is old is absurd.
"If we want to create a web of data we need to expand our definition of knowledge to go beyond obsolete knowledge and geeky factoids. I really don’t care what Leonardo DaVinci’s height was or which Nobel prize winners were born before 1945. I care about how other people feel about last night’s Breaking Bad series finale."
His second argument simply states that "Documents are dead". I could just point out that both this blog post, which you are currently reading, Faithful Reader, as well as his own article at Gigaom, are both "documents". You do the math.
It goes deeper than that, however. His argument, again, fails for extremely obvious reasons which betray a total misunderstanding of the Semantic Web and the state of the Web in general. His argument is that "now" data is encapsulated in tweets and other streaming, social-media, real-time data sources. While it is a fair point that more and more data is being passed around in tweets and their ilk, it is factually incorrect to claim that those sources are not valid components of the Semantic Web just like everything else on the web. Case in point: one of our products here at Fogbeam Labs, Neddick, consumes data from RSS feeds, IMAP email accounts, AND Twitter, performs semantic concept extraction on all of those data sources (and more are coming, including G+, Facebook, LinkedIn, etc.), and can find the connections between, say, a Tweet and a related blog post! That's the power of the Semantic Web, and the point that Mr. ter Heide seems to be missing.
His final argument is that "Information should be pushed, not pulled". Again, this betrays a complete misunderstanding of the Semantic Web. The knowledge extracted from Semantic Web sources can be used in either "push" or "pull" modalities. Again, one of our products can leverage Semantic Web data to generate real-time alerts using Email, XMPP, or HTTP POST, based on identifying a relevant bit of knowledge in a piece of content - whether that piece of content is a Tweet, a real-time Business Event extracted from a SOA/ESB backbone, or a Blog post.
Nearing the end of this piece, let me just say that the Semantic Web is becoming more and more important with every passing day. As tools like Apache Stanbol for automating the process of extracting rich semantics from unstructured data mature and become better and more widely available, the number of applications for explicit semantics is just going to mushroom.
To finish up, let's look at a quick example of what I'm talking about... let's say you have deployed our Enterprise Social Network - Quoddy and your company does something with musicians. Your Quoddy status update messages occasionally mention, say, Jon Bon Jovi, Bob Marley, Richard Marx, and Madonna. How would you do a search without SW tech that says "show me all posts that mention musicians"? Not gonna happen. But by using Stanbol for semantic extraction and storing that knowledge in a triplestore, we can make that kind of query trivial.
It gets better though... Stanbol comes "out of the box" with the ability to dereference entities that are in DBPedia and other knowledge bases, which is cool enough in its own right... but you can also easily add local knowledge and your own custom enhancement engines. So now entities that are meaningful only in your local domain (part numbers, SKUs, customer numbers, employee ID numbers, whatever) can be semantically interlinked and queried as part of the overall knowledge graph.
Hell, I'd go so far as to say that Apache Stanbol (along with Apache OpenNLP and a few related projects... Apache UIMA, Apache Clerezza, and Apache Marmotta, etc.) might just be the most important open source project around right now. And nobody has heard of it. Again, the Semantic Web is largely not something that the average end user needs to know or think about. But they'll benefit from the capabilities that semantic tech brings to the table.
At the end of the day, the Semantic Web is just a step on the road to having something like the Star Trek Computer or a widely available and ubiquitous IBM Watson. Saying that the Semantic Web has failed is to ignore all of the facts and deny reality.