Tuesday, December 9, 2008
Metalearning Book Available
We are pleased to announce that, after much work, the book:
Metalearning: Applications to Data Mining
co-authored by Christophe and three of his colleagues (Pavel Brazdil, Carlos Soares, and Ricardo Vilalta), is now available from Springer.
See http://www.springer.com/978-3-540-73262-4 for details.
Friday, October 10, 2008
Information Pathways in Social Networks
The first talk in the social network session of KDD 2008 covered an interesting paper by G. Kossinets, J. Kleinberg, and D. Watts titled The Structure of Information Pathways in a Social Communication Network (PDF). Although I was not at KDD, I was able to watch it online at videolectures.net.
Kleinberg, the presenter, made some interesting observations having to do with our "rhythmic" everyday conversations. The approach to analyzing communication within these social networks is focused on the frequency of correspondence, rather than the content conveyed.
They measure "distance" between individuals by measuring the minimum time required for information to pass from one node to another. A methodology based on Lamport's work and vector clocks in the area of distributed computing.
Using this metric, they are able to filter a busy network (one having edges for all communication packets) into a simplified network that contains only the edges lying on minimum-delay paths between pairs of nodes. They call this simplified view the network backbone. Below is an example of such a network (along with its caption) taken from the paper.
The nodes further outside of the center of the graph are more "out-of-date" with respect to node v, since they communicate less frequently.
I found the approach to be novel and useful. As with nearly any analysis technique, caution should be used in selecting the time-period and group size to be studied. Recency and frequency issues come into play as correspondence is aggregated. However, this pursuit offers another approach for more fully understanding information flow.
Originally published by Matt on his blog at: http://dmine.blogspot.com
Thursday, October 9, 2008
AMIA Competition Finalists!
Jun, Yao and Matt participated in the 2008 Data Mining Competition: Discovering Knowledge in NHANES Data, sponsored by the AMIA Knowledge Discovery and Data Mining Working Group, and were selected as one of the finalists by the judging panel. They will be presenting their results in a dedicated session of the AMIA Annual Symposium in Washington, DC, in November 2008. Congratulations!
Thursday, September 25, 2008
A Couple of Interesting Papers
Here are a couple of papers that others might also find interesting.
Title: Information-Theoretic Definition of Similarity (PDF)
Conference: ICML 1998
The paper provides a general similarity measure applicable across many domains. The author argues that the formulation satisfies "universality" and "theoretical justification", unlike previous similarity measures, which are domain-specific. The formula is:
sim(A,B) = log P(common(A,B)) / log P(description(A,B))
where common(A,B) is a proposition that states the commonalities between A and B, and description(A,B) is a proposition that describes what A and B are.
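As a toy illustration of the formula (my own sketch, not from the paper), suppose A and B are sets of independent features with known probabilities; the commonalities are then the shared features and the description is the union:

import math

# Toy sketch: A and B are feature sets; feature probabilities are
# assumed known and independent (an assumption made for illustration).
def lin_similarity(a, b, prob):
    common = a & b
    description = a | b
    log_p_common = sum(math.log(prob[f]) for f in common)
    log_p_description = sum(math.log(prob[f]) for f in description)
    return log_p_common / log_p_description

prob = {'red': 0.3, 'round': 0.5, 'fruit': 0.2, 'small': 0.4}  # hypothetical
print(lin_similarity({'red', 'round', 'fruit'}, {'red', 'fruit', 'small'}, prob))

Under this model the measure lies between 0 and 1, reaching 1 when A and B share all of their features.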
Title: An Introduction to Quantum Computing
Author: Noson S. Yanofsky
The paper gives a taste of quantum computing, targeted at computer science undergraduates (and even advanced high school students). Some of the (fun) basic points include the following. A qubit can exist in several states at the same time (superposition), but when it is measured, it collapses to either 0 or 1. When two quantum states are combined, their amplitudes can cancel, so the resulting magnitude can decrease (interference).
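A quick numerical sketch of interference (my own illustration, simplified to a single basis state):

# Probabilities come from squared amplitude magnitudes; unlike
# probabilities, amplitudes can carry opposite signs and cancel.
amp_path1 = 1 / 2**0.5    # amplitude for reaching |0> via one path
amp_path2 = -1 / 2**0.5   # amplitude for reaching |0> via another path
total = amp_path1 + amp_path2
print(abs(amp_path1)**2)  # 0.5 on its own
print(abs(total)**2)      # 0.0 after the two paths interfere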
Wednesday, April 16, 2008
Picture of our Blog
Here is a picture of our blog provided by bscopes.com.
Here is the legend they provide to help interpret the graph. Clearly, not all of the blogs referencing our blog are listed (due to the sparsity of the data collected at bscopes). As is, the value of this graph is limited to showing the number of links within each of our blog entries. Despite the current limitations, I find the idea of a web service that produces a visual representation of a blog interesting.
Wednesday, April 2, 2008
Recipe for Kim-Chee Fried Rice
One of our lab's fun traditions is our weekly potluck lunch. Once a week, everyone brings something from home that we put together and share for lunch. This is a great time to socialize and talk informally about our research or anything else. People typically bring left-overs, but Jun, our in-house Korean lab member, makes a point of preparing a Korean dish for us every week, typically a curry dish or kim-chee. As we do not wish to keep this to ourselves, a picture of Jun's kim-chee and his recipe are found below. Enjoy!
1. Prepare the kim-chee. (You can purchase it at any Korean or Asian market.)
2. Mix with vegetables or beef. (I recommend adding chopped onion, garlic, and tuna.)
3. Fry in olive oil for 4-5 minutes.
4. Done!!! (Isn't it easy?)
Pictures from AAAI Symposium
Here are a few more pictures from the AAAI Social Information Processing Spring Symposium at Stanford University.
Thursday, March 27, 2008
AAAI Social Information Processing Symposium Summary
I apologize for not getting back sooner with results and thoughts from the symposium. As I said in my previous post, Matt and I attended the AAAI 2008 Social Information Processing Symposium. Matt presented on Social Capital in the Blogosphere, and it seemed to be well received by the community. The audience followed up our presentation with a number of questions regarding possible actions and experiments that could be taken within our framework for measuring social capital. It strengthened our opinion that the work we have done provides an intuitive way to understand a seemingly abstract topic like social capital. There is still a lot of work to be done in determining what constitutes an explicit or implicit link within the blogosphere, but we are on our way.
Here are several other thoughts, some specifically related to our research:
- Bonding links (a relationship with someone similar to you) should be easier to make than bridging links (a relationship with someone different from you). Thus, in a social network representation, you should probably see substantially more bonding than bridging taking place.
- There is a cost associated with forming a bonding or bridging link that we have not addressed up to this point. In general, this cost depends on both the type of link (bonding/bridging) and the individual social capital of the person you are attempting to form a link with.
- Nearly everything in the social information processing domain, when graphed, seems to follow a power law. Does individual social capital follow this distribution as well, i.e., do certain individuals have much more social capital than the population at large? If so, is there any way to leverage the social capital of all the individuals found in the long tail of the graph? For example, the wisdom-of-the-masses approach is working wonderfully in Wikipedia, where information may in some cases be more accurate than that of the so-called authorities. It's all theory, but just some ideas I've been thinking about.
- High cost, high reward. Blogging can take a lot of time, because good writing takes time. It takes substantially more time than other activities we heard about at the conference, such as tagging, posting pictures, or rating a product. But with the high cost come high rewards: as blogging has become mainstream, it has become a powerful tool for advancing ideas, products, companies and careers. Ultimately, we need to get some kind of reward for our involvement in a social network, even if it is just personal fulfillment, or our activity will dwindle.
- Stanford is the most beautiful campus I've ever been on.
- We heard about a cool new project called Freebase that seems to have the potential to someday replace Wikipedia as the best source for free, open content information. It looks smooth, provides easy ways to query for information and has an awesome API. It could be the next big thing. Matt posted his thoughts about it as well at his blog.
- Gustavo Glusman presented one of the coolest social network graphs I've ever seen (the flickrverse) which can be found here.
- Meeting a wide variety of people. There was representation from both academia and industry from a variety of locations. Plenty of people were from California, but there was also representation from other parts of the United States, the UK, Germany, China, Taiwan, Switzerland and probably more that I am forgetting. It was a great group to become involved in.
- Getting to learn from those who know more than I do. Everyone had their own expertise in specific social networks and with specific ideas. I learned a lot about social networking principles that are somewhat different than those found here in the blogosphere.
Wednesday, March 26, 2008
Hello from Stanford
Matt Smith and I will be at the AAAI 2008 Spring Symposium, which is being held at Stanford University from now until Friday. We are attending the Social Information Processing session and will be presenting a paper we co-authored with Christophe entitled Social Capital in the Blogosphere: A Case Study. The morning and afternoon sessions were great and I'll give a rundown later on. Matt will be presenting in about an hour and a half, so I'll post about how that went and everything else we are learning here.
Saturday, March 22, 2008
On Correlation versus Causation
Among the common mistakes made by data miners who lack training in statistics is the confusion between correlation and causation, often arising from the difference between observational studies and random controlled trials. To start with, here are some simple definitions.
- There is a relation of (positive) correlation between two random variables when high values of one are likely to be associated with high values of the other.
- There is a relation of cause and effect between two random variables when one is a determinant of the other.
- Causation implies correlation, but correlation does not necessarily imply causation.
- Correlation is easy to establish, causation is not.
- Random controlled trials establish causation.
- Observational studies only bring out correlation.
A classic illustration of how aggregation can mislead is Simpson's paradox, where a trend that holds in every subgroup reverses when the subgroups are combined. Consider two problem solvers, A and B:
- Simple problems: A solves 81 out of 87 and B solves 234 out of 270
- Complex problems: A solves 192 out of 263 and B solves 55 out of 80
A has the higher success rate in both categories, yet B has the higher overall rate (289/350 versus 273/350).
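A few lines of Python make the reversal explicit (the numbers are the ones above):

# A beats B within each category, yet B beats A once the categories
# are pooled -- the essence of Simpson's paradox.
data = {'A': [(81, 87), (192, 263)], 'B': [(234, 270), (55, 80)]}
for solver, strata in data.items():
    rates = [round(s / n, 2) for s, n in strata]
    overall = sum(s for s, _ in strata) / sum(n for _, n in strata)
    print(solver, rates, round(overall, 2))
# A [0.93, 0.73] 0.78
# B [0.87, 0.69] 0.83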
In the medical domain, for example, it is critical to discover causal rather than merely correlational relationships. One cannot take the risk of treating the "wrong" cause of a particular ailment. The same is true of many other situations outside of medicine. Hence, data miners would do well to understand confounding effects and use that knowledge both in the design of the experiments they run and in the conclusions they draw from experiments in general. I have been guilty of "jumping the gun" myself and reporting results that clearly ignored possible confounding effects.
Let me turn now to more mundane business applications. A couple of individuals reacted as follows to a (much shorter) comment I posted here about the difference between correlation and causation (I reproduce some of that conversation for completeness):
- (Jaime) - I don't think that insurance companies, or any other business that would use data mining, would or necessarily should care about the difference between correlation and causation in factors over which they have no control (exceptions, of course, for anything medical or legal). If they can determine that people with freckles have fewer car accidents, why shouldn't they offer people with freckles lower rates?
- (Will) - Jaime makes a good point. The question of correlation versus causation will be of only philosophical interest to a data mining practitioner, assuming that the underlying behavior being modeled does not change (and this will often be a safe bet). An illustration should make this subtlety clear. Suppose that insurance data indicates that people who play the board game Monopoly are better life insurance risks than people who do not. An insurance company might very well like to take advantage of such knowledge. Is there necessarily a causal arrow between these two items? No, of course not. Monopoly might not "make" someone live longer, and living longer may not "make" someone play Monopoly. Might there exist another characteristic which gives rise to both of these items (such as being a home-body who avoids death by automobile)? Yes, quite possibly. The insurance company does not care, as long as the relationship continues to hold.
This brings up an interesting point, of course. Is the matter of causation versus correlation only a philosophical one, with little bearing in practice? (A little bit like the No Free Lunch Theorem, which is a great theoretical result but seems to have little real impact in practical applications of machine learning; but that is another discussion: see here for details on this unrelated but interesting topic.) Let me try to address this a little bit (much of this is also found in my response on the above blog). The statistician (I use the term loosely, as I am not a statistician myself) seeks the true cause, the one that remains valid through time. The (business) practitioner, on the other hand, seeks mainly utility or applicability, which may become invalid over time but serves him/her well for some reasonable amount of time. Under this view of the world, I think it is possible to reconcile the two perspectives. Indeed, the statements "assuming that the underlying behavior being modeled does not change" and "as long as the relationship continues to hold" may be interpreted (in some way, see below) as effectively equivalent to what statisticians regard as "controlling for variables". By taking a dynamic approach where the relationship (or behavior) is continuously monitored for validity, and action is taken only as long as that relationship holds, the user is, in effect, relieved from the problem of lurking variables.

Let me illustrate with Will's example. Statisticians would indeed argue that there may be a confounding variable that explains the insurance company's finding, one that has nothing to do with playing Monopoly. Will proposed one: being a home-body. I'll continue the argument with that one. It may be that there are more home-body Monopoly players than not, and it is the "home-bodyness" (if such a word exists) that explains the lower risk for life insurance, not the Monopoly-playing. A statistician would be right in this case: if one had to come up with the "correct" answer and build a model that remains accurate now AND in the future, you would have to accept the statistician's approach and build your model on home-bodyness rather than Monopoly-playing. There is little arguing here.

I think that what Will and Jaime are getting at is that there is a way to, in some sense, side-step this issue; namely: monitor the relationship. Indeed, if I keep checking that the correlation continues to hold, then I don't care about any confounding effect. If there is none, then the correlation also manifests a causation and I am safe; if there are confounding effects, they will become manifest over time as the observed correlation weakens. I can choose at that time to invalidate my model; but in the meantime, it served me well, was accurate, and I did not have to worry about controlling for anything. Going back to the example: as long as the correlation is strong, I am OK. If it turns out that it is home-bodyness that causes the lower risk, I may eventually see more and more non-Monopoly players with low risk who also turn out to be home-bodies. The originally observed correlation will then decrease, telling me that I may wish to discontinue the use of my model.
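Here is a minimal sketch of what such monitoring might look like (my own illustration; the window size and threshold are arbitrary choices):

# Recompute the correlation over a sliding window of recent
# (predictor, outcome) pairs and flag the model once it weakens.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def monitor(pairs, window=100, threshold=0.3):
    for t in range(window, len(pairs) + 1):
        xs, ys = zip(*pairs[t - window:t])
        if pearson(xs, ys) < threshold:
            return t  # first point at which the model looks stale
    return None  # the correlation held up over the whole history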
The distinction may be viewed as one of only philosophical interest, at least in the context of such business cases. Again, in medicine, one may have a different perspective, as also pointed out by Jaime and Will. One of the drawbacks of the "correlation-driven" approach is that when the model is no longer valid (as seen by the decreasing correlation value), the practitioner has no idea what the cause may be and is thus left with no information as to where to go next. Then again, as suggested by Jaime and Will, maybe he/she does not care. From a strictly business standpoint, he/she was able to quickly build a model with high utility (even if only for a shorter period of time) instead of having to expend a lot of resources to build a "causation" model, at the risk of not doing any better, since not all confounding can ever be controlled for! (In fact, there are even situations where the necessary controlled experiments cannot be run; see here for a fun example.)
After all is said and done, and more has been said than done :-), one should be aware of confounding effects (or Simpson's paradox), and know how to deal with them: 1) stick to strictly random controlled experiments; or 2) use observations, but handle them with careful and continuous monitoring.
Labels:
Causation,
Correlation,
Random Controlled Trials
Wednesday, March 5, 2008
Ian Ayres' Super Crunchers Book
I recently came across Ian Ayres' book: Super Crunchers. It is a nice read. Ayres essentially makes the case for number crunching (data mining for many of us) in all aspects of business and social life. The book describes a large number of case studies where number crunching has been successfully applied (e.g., wine quality, teaching methods, medical practices, etc.), often providing answers that challenge traditional wisdom. The examples are rather compelling. Most of the studies rely exclusively on random controlled trials and the use of regression techniques. Yet, I think this is a great book for people starting in data mining or looking for good reasons to begin. (The other nice thing is that the book is very cheap: less than $20 on Amazon!). Enjoy!
Labels:
Case Studies,
Data Mining,
Ian Ayres,
Number Crunching
Spring Research Conference
The 22nd Annual Spring Research Conference for the College of Physical and Mathematical Sciences is coming up on Saturday, March 15th. The current presentation schedule for the conference can be found here. Six members of the lab will be presenting research at the conference.
- Can a Computer Learn to do Genealogy? - Stephen Ivie
- Characterizing UCI Data Sets - Jun won Lee
- An Evaluation of Name, Location, and Date Comparison Metrics for Record Linkage - Yao Huang Lin
- Building Community around a Blog - Matthew Smith
- Social Capital in the Blogosphere - Nathan Purser
- Utilizing Stacking for Feature Reduction in Graph-Based Genealogical Entity Resolution - Stephen Ivie
- Keeping it Spinning: A Background Check on Virtual Storage Providers - Anne Roach
Wednesday, February 27, 2008
Data Mining Lab -- Experience is Key
My name is Nathan Davis. I've been a member of the Data Mining Lab at BYU for 3 years now, and have had a wonderful experience. In fact, I've had a lot of great experiences, many of which have prepared me for future work and research.
With respect to work, I've had a chance to conduct real world data mining for large industry partners. In addition to learning, through experience, about the technical aspects of the data mining process, the lab has also given me an opportunity to learn about business aspects, by meeting face-to-face with industry partner representatives. Most recently we were able to meet with a Vice President of a large retail company to discuss several issues relevant to the research we conduct!
Further, the lab has provided me with great research experience. Dr. Giraud-Carrier is a tremendous academic, with a great deal of interest in his students and research assistants. Under his tutelage I've published academic papers and will soon be completing a Masters degree. I even had the opportunity to travel to the Netherlands to present at an academic conference.
Currently I'm conducting a software engineering internship with Google, and my experience in the lab is helping me to be successful. For anyone interested in gaining experience that will help them succeed academically and professionally, I'd highly recommend dropping by the lab and finding out about the great experiences that await you.
Tuesday, February 26, 2008
10 Reasons Why Data Mining is Fun and Rewarding
1. You can train your computer to do things you can't.
2. The methods are complicated, but the applications are intuitive.
3. It can save/make lots of money.
4. Data mining has applications in nearly any area you can think of.
5. You get to deal with data sets larger than you could ever process in your mind.
6. There are big developments taking place in the industry.
7. Data mining algorithms attempt to model how things work in biology and the real world (e.g., neural networks, genetic algorithms).
8. There is no one size fits all solution when it comes to data mining.
9. You help make the statement "I have more data than I know what to do with" obsolete.
10. Your results can make an immediate impact in whatever industry you are involved in.
Why do you like data mining today? What got you interested in the first place?
Friday, February 22, 2008
Data Mining in the Workplace
I graduate in a few months, so I've been job hunting lately. I attended the Technical Career Fair here at BYU a few weeks back and was impressed by the number of companies interested in data mining. With the exception of one or two, they were all either currently involved in data mining or interested in becoming involved in the near future. I think that as more and more companies amass mounds of data, they are realizing that collecting data for data's sake is useless and that they can get much more out of their data than they have in the past. Data mining is no longer 'a hiss and a byword'. I am witnessing firsthand that it is the direction many companies are taking to improve the efficiency of their operations.
Wednesday, February 20, 2008
Our Lab in Utah CEO Magazine
The BYU Data Mining Lab is featured in an article published in this month's Utah CEO Magazine. The article, found here, includes expert opinions from our own Professor Christophe Giraud-Carrier on why finding a champion for data mining within a company is important and how successful data mining is defined. In addition, the article contains a short feature on the lab which explains the benefits students and businesses gain from being involved with the lab. It is exciting to see outside recognition for the great work that goes on here every day.
Labels:
Data Mining,
Data Mining Lab,
Utah CEO Magazine
Wednesday, February 13, 2008
Social Connections in Decline
Robert Putnam, an influential social capital researcher, visited BYU nearly two years ago to discuss how social connections are on the decline. Here is a good summary of Putnam's talk on BYU NewsNet. His research over the past decade has shown a negative trend: people are connecting socially less than they used to. The speech gave fuel to the research on social networks that we had been involved in and has been a strong motivation for our current work on social capital.
Figure 1. "The TV Connection" shows that group membership tends to decline as television viewing increases among those having twelve or more years of education. (see The Strange Disappearance of Civic America)
Empirical studies on group membership, like the one shown in the plot above, contribute to the evidence Putnam uses to support this claim.
(Note: This article was originally posted on dmine.blogspot.com)
Figure 1. "The TV Connection" shows that group membership tends to decline as television viewing increases among those having twelve or more years of education. (see The Strange Disappearance of Civic America)
Empirical studies on group membership, like the study shown in the plot above contribute to the evidence which Putnam uses to support this claim.
(Note: This article was originally posted on dmine.blogspot.com)
Data Mining Search Engine
I recently learned at the Data Mining Research blog about a data mining search engine. The search engine, which can be found here, restricts search results largely to a list of data mining sites. It might prove to be a useful tool for focusing your research on trusted data mining sites, or for discovering new resources in our field of interest. Give it a shot. I don't have much experience with custom Google search engines, but this one seems useful.
Monday, February 11, 2008
Resolving Blog Entities
Problem: How do you determine whether a particular url is associated with a feed? For example, if another blog posted a link to datamininglab.blogspot.com, how would you determine the feed (http://datamininglab.blogspot.com/feeds/posts/default) associated with that url?
Solution: In our research we perform two operations to determine whether a url has an associated feed. First, we determine whether the url represents an actual feed. This can usually be done by submitting an http request and checking the content-type header included in the response. If the content-type is "application/rss+xml", "application/atom+xml", "application/rdf+xml", or "text/xml", then you are probably dealing with a feed.
Second, you need to check whether the url, while not itself a feed, is associated with one. This would be the case when the url points to the front page or to a specific entry of a blog. If the content-type in the http response, as described in step one, does not indicate a feed, then you parse the "link" tags found between the "head" tags. If a "link" tag has a rel="alternate" attribute, you can check its type attribute for a value of "application/rss+xml" or "application/atom+xml", similar to what we did in step one. If it does, you can read the value of the href attribute to retrieve the feed url associated with the original url. For example, on the main page of our blog, if you look at the page source, you will see link tags to both the rss and atom feeds associated with our blog.
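Here is a rough sketch of both steps using Python's standard library (my own illustration; details like redirect handling and relative hrefs are omitted):

from html.parser import HTMLParser
from urllib.request import urlopen

FEED_TYPES = ('application/rss+xml', 'application/atom+xml',
              'application/rdf+xml', 'text/xml')

class FeedLinkParser(HTMLParser):
    # Collects the href of any <link rel="alternate" type="...feed..."> tag.
    def __init__(self):
        super().__init__()
        self.feeds = []
    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if (tag == 'link' and a.get('rel') == 'alternate'
                and a.get('type') in FEED_TYPES):
            self.feeds.append(a.get('href'))

def resolve_feed(url):
    response = urlopen(url)
    # Step one: the url may itself be a feed.
    if response.headers.get_content_type() in FEED_TYPES:
        return url
    # Step two: otherwise look for a feed advertised in the page head.
    parser = FeedLinkParser()
    parser.feed(response.read().decode('utf-8', errors='replace'))
    return parser.feeds[0] if parser.feeds else None

print(resolve_feed('http://datamininglab.blogspot.com/'))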
There are certainly other ways for resolving blog entities, but this seems to work fairly consistently. Feel free to chime in if you have any ideas on how to better accomplish this task.
Wednesday, February 6, 2008
Lab Spotlight
At first glance, the Data Mining Lab looks like your average computer science research lab. But conducting research in the data mining lab is not your average research experience. I'll explain what I mean by citing three areas that help make our lab experience great.
The People - French, Spanish, Portuguese, Korean, Hmong, Tagalog, Chinese: all languages spoken by members of the data mining lab, and one example of the diverse capabilities of its members. We love computer science. We love data mining. We love Python, open source, and neural nets (if it's possible to love a neural net). But we also appreciate politics, religion, cooking and anything else that is meaningful... or at least interesting. I remember heated discussions last semester about state-funded private school vouchers, the political primaries and caucuses, and the grammatically correct way to use the word "good." Being in the lab every day has enriched my views on the world and made me a more rounded person.
The Activities - All work and no play? Makes you want to run away. Which is why we appreciate the many activities of the data mining lab. Each week we have a lab meeting/potluck where we discuss our progress. Curry, kimchi or burritos would make any meeting more exciting. Occasionally we will attend the campus devotionals/forums or the department colloquiums together as a lab. For example, last week our lab went and listened to Paul Rusesabagina tell his inspirational story, which was portrayed in the movie Hotel Rwanda. In addition to these weekly activities, each semester we meet together at our advisor Christophe's home for a lab social. The food is always fabulous and we even get to bring along our families which helps us bond even more. All of these activities help to make our lab experience unique.
The Research Environment - When it comes down to it, the reason we are all here is because we enjoy researching data mining, and the data mining lab is the perfect place to do it. We are given flexibility to research what is most interesting to us and are given the tools to be successful. This is mostly due to our adviser Christophe, who is flexible and supportive of our aspirations, while also helping us to investigate the feasibility and usefulness of the research topics we are considering. When we run into problems in our research, there is almost always someone in the lab with a helpful idea or suggestion. You may begin discussing a question with one lab member, but it usually isn't long before the whole lab is involved. There are also plenty of opportunities to publish papers, present at conferences and work with real companies. I can't think of a better environment for conducting research than we have here.
These are just a few of the reasons why it is awesome to be a member of the data mining lab. This is the place to be.
Labels:
Data Mining,
Data Mining Lab,
Research Opportunities
Tuesday, February 5, 2008
Meta-learning
I just finished reading Rice's seminal paper on algorithm selection [Rice, J.R. (1976). The Algorithm Selection Problem. Advances in Computers, 15:65-118]. For obvious reasons, it does not talk about meta-learning (look at the date!) but meta-learning is clearly one natural approach to solving the algorithm selection problem.
Kate Smith-Miles recently wrote a very nice survey paper (to appear in ACM Computing Surveys) where she uses Rice's framework to review and describe most known attempts at algorithm selection.
Rice does indeed offer a very clean formalism for the problem of algorithm selection, where a problem X from some problem space P is mapped, via some feature extraction process, to a representation f(X) in some feature space F, and the selection algorithm S maps f(X) to some algorithm Y in some algorithm space A, so that the performance of Y on X (for some adequately chosen performance measure) is in some sense optimal. Hence, as pointed out, "the selection mapping now depends only on the features f(X), yet the performance mapping still depends on the problem X" and, of course, "the determination of the best (or even good) features is one of the most important, yet nebulous, aspects of the algorithm selection process."
Rice is also quick to point out that "ideally, those problems with the same features would have the same performance for any algorithm being considered." I actually also pointed that out in my recent paper [Giraud-Carrier, C. (2005). The Data Mining Advisor: Meta-learning at the Service of Practitioners. In Proceedings of the 4th International Conference on Machine Learning Applications, 113-119], where I stated that unless, for all X and X' (X <> X'), f(X)=f(X') implies p(X)=p(X') (where p is the performance measure), the meta-training set may be noisy and meta-learning may in turn be sub-optimal.
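In meta-learning terms, the selection mapping S is itself learned from past (f(X), best algorithm) pairs. A minimal sketch (my own illustration; the meta-features and labels below are hypothetical):

from sklearn.tree import DecisionTreeClassifier

# Each row is f(X) for a past problem -- say, number of instances,
# number of attributes, and class entropy -- and the label is the
# algorithm that performed best on it.
meta_features = [[150, 4, 1.58], [1000, 20, 0.97],
                 [300, 8, 1.00], [5000, 60, 2.10]]
best_algorithm = ['knn', 'naive_bayes', 'knn', 'svm']

selector = DecisionTreeClassifier().fit(meta_features, best_algorithm)
print(selector.predict([[400, 10, 1.20]]))  # S(f(X)) for a new problem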
Rice's framework naturally covers various forms of selection (e.g., best algorithm, best algorithm for a subclass of problems, etc.) as well as multi-criteria performance measures.
Another important point brought out by Rice, and often overlooked in the machine learning community, is that "most algorithms are developed for a particular class of problems even though the class is never explicitly defined. Thus the performance of algorithms is unlikely to be understood without some idea of the problem class associated with their development." Foster Provost and I called this the Strong Assumption of Machine Learning in our paper on the justification of meta-learning [Giraud-Carrier, C. and Provost, F. (2005). Towards a Justification of Meta-learning: Is the No Free Lunch Theorem a Show-stopper? In Proceedings of the ICML-05 Workshop on Meta-learning, 12-19]. I (and others) have often argued that the notion of delimiting the class of problems on which an algorithm performs well is critical to advances in machine learning.
Anyways, although Rice offers no specific method to solve the algorithm selection problem, the paper is highly relevant and very well-written. A must read for anyone interested in meta-learning.
Monday, February 4, 2008
Social Capital Simulation
Our recent work has explored the concept of social capital, which I have discussed previously. Our social capital metrics, namely bonding and bridging (popularized by Robert Putnam), utilize the hybrid network methodology that we have developed for online communities.
To help you understand our metrics, I have created a basic social capital simulation (an Excel spreadsheet) with five nodes. The simulation allows you to change the connection strengths in both the implicit affinity network (IAN) and the explicit social network (ESN). Changing these values will give you an idea of how social capital fluctuates as the social network changes.
The figure above shows the initial configuration of the simulation. The dashed blue lines represent the IAN and the solid pink lines represent the ESN. The thicker the lines the stronger the connection. The weights for the IAN were randomly assigned, while the ESN weights were all set to one, thus creating a clique.
Initially, the bonding and bridging social capital are both 1, since everyone in the network is connected. To see how the social capital fluctuates, change the blue and/or pink values, again representing the IAN and the ESN weights respectively, in the spreadsheet.
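For readers without Excel handy, here is a toy calculation in the same spirit (my own simplification, not the exact formula from our paper): realized links between high-affinity pairs count toward bonding, those between low-affinity pairs toward bridging, each normalized by its potential.

# Symmetric IAN affinities and ESN strengths for five nodes (0-4);
# the cutoff separating "similar" from "dissimilar" pairs is arbitrary.
ian = {(0, 1): 0.9, (0, 2): 0.2, (0, 3): 0.1, (0, 4): 0.5,
       (1, 2): 0.7, (1, 3): 0.4, (1, 4): 0.3,
       (2, 3): 0.8, (2, 4): 0.6, (3, 4): 0.2}
esn = {pair: 1.0 for pair in ian}  # the initial clique from the figure

def social_capital(ian, esn, cutoff=0.5):
    bonding_pairs = [p for p, a in ian.items() if a >= cutoff]
    bridging_pairs = [p for p, a in ian.items() if a < cutoff]
    bonding = sum(esn[p] for p in bonding_pairs) / len(bonding_pairs)
    bridging = sum(esn[p] for p in bridging_pairs) / len(bridging_pairs)
    return bonding, bridging

print(social_capital(ian, esn))  # (1.0, 1.0) while everyone is connected

As in the spreadsheet, weakening ESN entries lowers one or both scores, depending on which kind of pair loses its connection.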
(Note: This article was originally posted on dmine.blogspot.com)
Google Reader API
Problem: To perform social network analysis on blog data, you need consistent data over a period of time. Periodically retrieving the content directly from the blog's feed has its limitations, because you can only retrieve current blog content. Thus, if you decide to begin retrieving content from a specific blog, you have no way of getting at the archived blog content.
Solution: Use the unofficial Google Reader API to retrieve archived feed content. The API was first documented two years ago at Niall Kennedy's blog, and its reality was confirmed by several Google employees associated with the project. Little has been published since about an official release of the API, but the unofficial API still works great for retrieving archived feed content.
In our research, the framework we use for interacting with the API is pyrfeed. The creators of pyrfeed have also documented some additional capabilities of the API. The Google Code site has two downloadable files. The Google Reader stand-alone package is a simple interface for performing basic actions such as feed retrieval. The other file, the full pyrfeed release, also provides GUI and command-line interfaces for interacting with the API, along with automated blog content storage in an SQLite database. An example of how to retrieve archived content can be seen below.
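The sketch below hits the read-only endpoint directly rather than going through pyrfeed (the URL shape follows the unofficial documentation; the n and c parameters are my best recollection, so verify them against pyrfeed's notes):

from urllib.parse import quote
from urllib.request import urlopen

def fetch_archive(feed_url, count=100, continuation=None):
    # Read-only endpoint for public feeds; returns Atom XML that
    # includes archived entries Google Reader has already crawled.
    url = 'http://www.google.com/reader/atom/feed/%s?n=%d' % (
        quote(feed_url, safe=''), count)
    if continuation:
        url += '&c=' + continuation  # paging token from a prior response
    return urlopen(url).read()

atom = fetch_archive('http://datamininglab.blogspot.com/feeds/posts/default')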
In summary, if you are looking for a simple way to retrieve archived blog content, the Google Reader API and pyrfeed framework are cheap and easy tools for doing so. The blogosphere is at your fingertips.
Wednesday, January 30, 2008
Social Capital?
My last post was titled Social Capital in the Blogosphere and dealt with the experiments we are conducting on the social capital found in blog networks. When you saw the title, some of you probably wondered: social capital... what's that? I do not claim to be an expert on social capital, but I have a fair idea of what it is and how it is useful. Interpretations vary, but our idea of social capital has been motivated by that of Robert Putnam, author of Bowling Alone, who came and spoke here at BYU last year.
In many realms, who you know may matter just as much as what you know. The value represented by these connections in a social network is known as the social capital of that network. In our work, we compute the social capital of a blog network using a mathematical formula that takes into account both the actual and potential bonding (connections with similar people) and bridging (connections with dissimilar people) in blog networks. A more detailed description of this formula can be found in this paper. Matt also recently posted about other ways of measuring social capital; his findings can be found here and here. The social capital of a network can then be used to determine how much value furthering connections in that network would have. In our example, it would tell you whether or not you should attempt to establish a place in a certain blog network. Thanks for your comments; hopefully this gives you a good intro to social capital in the context we are using it.
Monday, January 28, 2008
Social Capital in the Blogosphere
For the past year, Matt and I (Nate) have been conducting research into the social capital that can be found online in blog networks. Why should you care? Well, first off, you're reading a blog so you must have some interest in the overall blogging community. But more importantly, blogs are being used to establish the identity of people, places and products. In today's online age, the potential of blogs is tremendous. Here's a quick summary of our research.
What: Analysis has been done on the explicit connections (links, comments, friend lists) between blogs. Little work has been done regarding the implicit connections (interests, hobbies, location) that exist between blog authors. We are conducting research into methods of using both explicit and implicit connections in social network analysis.
How: We have retrieved a large archive of blog content for use in our research. An explicit social network is created from the hyperlinks found in the blog content. Using topic extraction methods such as Latent Dirichlet Allocation, a network of implicit connections is constructed. Overlaying the implicit network on the explicit network allows potential and actual connections, or social capital, to be identified. An example graphical representation of one of these networks, which we created using Cytoscape, is found below.
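As a rough sketch of the implicit side (our actual pipeline differs in its details), one can fit LDA with gensim over one aggregated document per blog and link blogs whose topic mixtures are close:

from gensim import corpora, models, matutils

# Rough sketch: one token list of aggregated post text per blog
# author; the toy data and the similarity threshold are hypothetical.
blogs = {
    'alice': ['python', 'data', 'mining', 'classifier'],
    'bob': ['data', 'mining', 'blogosphere', 'network'],
    'carol': ['recipe', 'kimchi', 'cooking', 'rice'],
}
dictionary = corpora.Dictionary(blogs.values())
bows = {name: dictionary.doc2bow(toks) for name, toks in blogs.items()}
lda = models.LdaModel(list(bows.values()), id2word=dictionary, num_topics=2)

# Add an implicit edge when two blogs' topic mixtures are similar.
names = list(blogs)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        sim = matutils.cossim(lda[bows[a]], lda[bows[b]])
        if sim > 0.5:
            print('implicit edge:', a, b, round(sim, 2))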
Why: Information about actual social communities, and the implicit similarities that exist between them, can be used to recommend potentially valuable actions that could be taken. For example, a politician could contact influential blogs and attempt to convince them to lead a grassroots campaign for his candidacy. A doctor could use social network analysis to identify and coordinate with colleagues in order to help patients with rare diseases. Companies could approach blogs that are found in the center of their customer market about participating in usability testing or marketing campaigns. Conducting social network analysis on blogging communities has valuable potential in many domains.
You can learn more of the details about our research here at our lab wiki.
Wednesday, January 23, 2008
What is the Data Mining Lab?
The Data Mining Lab is a research lab hosted by the Computer Science Department at Brigham Young University. We research methods for extracting valuable knowledge from data. Data mining can be applied to a wide range of business and scientific problems. Almost everyone gathers data, and we go about finding ways to make that data useful.
Current areas of research include:
The purpose of this blog is to establish connections and further our collaboration with others who share our interests. We will be publishing information gained from our research, and we invite others to share their insights here as well. Feel free to contact us by posting comments or by email. More contact information can be found at our lab website, located at http://dml.cs.byu.edu.