Question
WordNet is a semantic lexicon for the English language that computational linguists and cognitive scientists use extensively. For example, WordNet was a key component in IBM's Jeopardy-playing Watson computer system. WordNet groups words into sets of synonyms called synsets. For example, { AND circuit, AND gate } is a synset that represents a logical gate that fires only when all of its inputs fire. WordNet also describes semantic relationships between synsets. One such relationship is the is-a relationship, which connects a hyponym (more specific synset) to a hypernym (more general synset). For example, the synset { gate, logic gate } is a hypernym of { AND circuit, AND gate } because an AND gate is a kind of logic gate.
The WordNet digraph. Your first task is to build the WordNet digraph: each vertex v is an integer that represents a synset, and each directed edge vw represents that w is a hypernym of v. The WordNet digraph is a rooted DAG: it is acyclic and has one vertex—the root—that is an ancestor of every other vertex. However, it is not necessarily a tree because a synset can have more than one hypernym. A small subgraph of the WordNet digraph appears below.
The WordNet input file formats. We now describe the two data files that you will use to create the WordNet digraph. The files are in comma-separated values (CSV) format: each line contains a sequence of fields, separated by commas.
List of synsets. The file synsets.txt contains all noun synsets in WordNet, one per line. Line i of the file (counting from 0) contains the information for synset i. The first field is the synset id, which is always the integer i; the second field is the synonym set (or synset); and the third field is its dictionary definition (or gloss), which is not relevant to this assignment.
For example, line 36 means that the synset { AND_circuit, AND_gate } has an id number of 36 and its gloss is a circuit in a computer that fires only when all of its inputs fire. The individual nouns that constitute a synset are separated by spaces. If a noun contains more than one word, the underscore character connects the words (and not the space character).
List of hypernyms. The file hypernyms.txt contains the hypernym relationships. Line i of the file (counting from 0) contains the hypernyms of synset i. The first field is the synset id, which is always the integer i; subsequent fields are the id numbers of the synset's hypernyms.
For example, line 36 means that synset 36 (AND_circuit AND_gate) has 42338 (gate logic_gate) as its only hypernym. Line 34 means that synset 34 (AIDS acquired_immune_deficiency_syndrome) has two hypernyms: 47569 (immunodeficiency) and 56099 (infectious_disease).
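Both formats can be parsed with plain String.split calls. A minimal sketch (the class and method names here are illustrative, not part of the assignment API); note the split limit of 3 for synset lines, so commas inside the gloss are not treated as field separators:

```java
import java.util.Arrays;
import java.util.List;

public class ParseDemo {
    // Parse one synsets.txt line: "id,noun1 noun2 ...,gloss"
    static int synsetId(String line) {
        return Integer.parseInt(line.split(",", 3)[0]);
    }

    // Nouns within a synset are separated by single spaces
    static List<String> synsetNouns(String line) {
        return Arrays.asList(line.split(",", 3)[1].split(" "));
    }

    // Parse one hypernyms.txt line: "id,hypernymId1,hypernymId2,..."
    static int[] hypernymIds(String line) {
        String[] fields = line.split(",");
        int[] ids = new int[fields.length - 1];
        for (int i = 1; i < fields.length; i++) ids[i - 1] = Integer.parseInt(fields[i]);
        return ids;
    }

    public static void main(String[] args) {
        String syn = "36,AND_circuit AND_gate,a circuit in a computer that fires only when all of its inputs fire";
        String hyp = "34,47569,56099";
        System.out.println(synsetId(syn));                      // 36
        System.out.println(synsetNouns(syn));                   // [AND_circuit, AND_gate]
        System.out.println(Arrays.toString(hypernymIds(hyp)));  // [47569, 56099]
    }
}
```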
WordNet data type. Implement an immutable data type WordNet with the following API:
Corner cases. All methods and the constructor should throw a java.lang.NullPointerException if any argument is null. The distance() and sca() methods should throw a java.lang.IllegalArgumentException unless both of the noun arguments are WordNet nouns. You may assume that the input files are in the specified format (and that the underlying digraph is a rooted DAG).
Performance requirements. Your data type should use space linear in the input size (size of synsets and hypernyms files). The constructor should take time linearithmic (or better) in the input size. The method isNoun() should run in time logarithmic (or better) in the number of nouns. The methods distance() and sca() should make exactly one call to the length() and ancestor() methods in ShortestCommonAncestor, respectively. For the analysis, assume that the number of characters in a noun or synset is bounded by a constant.
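The logarithmic isNoun() bound can be met with any sorted structure, e.g. a TreeMap keyed by noun (as the solution below does) or binary search over a sorted array. A hypothetical sketch of the binary-search variant (class and field names are made up for illustration):

```java
import java.util.Arrays;

public class NounIndex {
    private final String[] sorted;  // all nouns, sorted once in the constructor

    public NounIndex(String[] nouns) {
        sorted = nouns.clone();
        Arrays.sort(sorted);        // O(n log n), within the linearithmic budget
    }

    // O(log n) membership test via binary search
    public boolean isNoun(String word) {
        return Arrays.binarySearch(sorted, word) >= 0;
    }

    public static void main(String[] args) {
        NounIndex idx = new NounIndex(new String[] { "zebra", "AND_gate", "horse" });
        System.out.println(idx.isNoun("horse"));  // true
        System.out.println(idx.isNoun("mule"));   // false
    }
}
```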
Shortest common ancestor. An ancestral path between two vertices v and w in a rooted DAG is a directed path from v to a common ancestor x, together with a directed path from w to the same ancestor x. A shortest ancestral path is an ancestral path of minimum total length. We refer to the common ancestor in a shortest ancestral path as a shortest common ancestor. Note that a shortest common ancestor always exists because the root is an ancestor of every vertex. Note also that an ancestral path is a path, but not a directed path.
We generalize the notion of shortest common ancestor to subsets of vertices. A shortest ancestral path of two subsets of vertices A and B is a shortest ancestral path over all pairs of vertices v and w, with v in A and w in B.
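One way to realize this definition is a multi-source BFS: compute distances from every vertex in A, then from every vertex in B, and minimize dA[x] + dB[x] over all vertices x reached from both sides. A self-contained sketch on a toy rooted DAG (the graph, sets, and names are made up for illustration; the answer code below uses a different, interleaved-queue variant):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ScaSketch {
    // BFS distances from a set of source vertices; digraph given as adjacency lists
    static Map<Integer, Integer> bfs(Map<Integer, List<Integer>> adj, Set<Integer> sources) {
        Map<Integer, Integer> dist = new HashMap<>();
        Deque<Integer> queue = new ArrayDeque<>(sources);
        for (int s : sources) dist.put(s, 0);
        while (!queue.isEmpty()) {
            int v = queue.poll();
            for (int w : adj.getOrDefault(v, List.of())) {
                if (!dist.containsKey(w)) {
                    dist.put(w, dist.get(v) + 1);
                    queue.add(w);
                }
            }
        }
        return dist;
    }

    // length of a shortest ancestral path over subsets A and B; -1 if none exists
    static int length(Map<Integer, List<Integer>> adj, Set<Integer> a, Set<Integer> b) {
        Map<Integer, Integer> da = bfs(adj, a), db = bfs(adj, b);
        int best = -1;
        for (int x : da.keySet()) {
            if (db.containsKey(x)) {
                int len = da.get(x) + db.get(x);
                if (best < 0 || len < best) best = len;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // toy rooted DAG: 1 and 2 point to root 0; 3 and 4 point to 1; 5 points to 2
        Map<Integer, List<Integer>> adj = Map.of(
            1, List.of(0), 2, List.of(0), 3, List.of(1), 4, List.of(1), 5, List.of(2));
        System.out.println(length(adj, Set.of(3), Set.of(4)));  // 2 (via ancestor 1)
        System.out.println(length(adj, Set.of(3), Set.of(5)));  // 4 (via root 0)
    }
}
```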
Shortest common ancestor data type. Implement an immutable data type ShortestCommonAncestor with the following API:
Corner cases. All methods and the constructor should throw a java.lang.NullPointerException if any argument is null or if any iterable argument contains a null item. The constructor should throw a java.lang.IllegalArgumentException if the digraph is not a rooted DAG. The methods length() and ancestor() should throw a java.lang.IndexOutOfBoundsException if any argument vertex is invalid and a java.lang.IllegalArgumentException if any iterable argument contains zero vertices.
Basic performance requirements. Your data type should use space proportional to E + V, where E and V are the number of edges and vertices in the digraph, respectively. All methods and the constructor should take time proportional to E + V (or better). You will receive most of the credit for meeting these basic requirements.
Additional performance requirements. For full credit, in addition, the methods length() and ancestor() should take time proportional to the number of vertices and edges reachable from the argument vertices (or better). For example, to compute the shortest common ancestor of v and w in the digraph below, your algorithm can examine only the highlighted vertices and edges, and it cannot initialize any vertex-indexed arrays.
Test client. The following test client takes the name of a digraph input file as a command-line argument, creates the digraph, reads in vertex pairs from standard input, and prints the length of the shortest ancestral path between the two vertices along with a shortest common ancestor:
Here is a sample execution (the bold indicates what you type):
Measuring the semantic relatedness of two nouns. Semantic relatedness refers to the degree to which two concepts are related. Measuring semantic relatedness is a challenging problem. For example, most people would consider George W. Bush and John F. Kennedy (two U.S. presidents) to be more closely related than George W. Bush and chimpanzee (two primates). It might not be clear whether George W. Bush and Eric Arthur Blair are more related than two arbitrary people. However, both George W. Bush and Eric Arthur Blair (aka George Orwell) are famous communicators and, therefore, closely related.
We define the semantic relatedness of two WordNet nouns x and y as follows:
A = set of synsets in which x appears
B = set of synsets in which y appears
distance(x, y) = length of shortest ancestral path of subsets A and B
sca(x, y) = a shortest common ancestor of subsets A and B
This is the notion of distance that you will use to implement the distance() and sca() methods in the WordNet data type.
Outcast detection. Given a list of WordNet nouns x1, x2, ..., xn, which noun is the least related to the others? To identify an outcast, compute the sum of the distances between each noun and every other one:
di = distance(xi, x1) + distance(xi, x2) + ... + distance(xi, xn)
and return a noun xt for which dt is maximum. Note that because distance(xi, xi) = 0, it will not contribute to the sum.
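The sum above can be computed directly with a nested loop. A sketch with a stand-in distance function (a real implementation delegates to WordNet.distance(); the toy metric here is invented purely to make the example self-contained):

```java
public class OutcastSketch {
    interface Distance {
        int between(String x, String y);
    }

    // return the noun whose summed distance to all the others is maximum
    static String outcast(String[] nouns, Distance d) {
        String best = null;
        int bestSum = -1;
        for (String x : nouns) {
            int sum = 0;
            for (String y : nouns) sum += d.between(x, y);  // d(x, x) = 0 contributes nothing
            if (sum > bestSum) {
                bestSum = sum;
                best = x;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // toy metric: distance = difference in string length (illustration only)
        Distance toy = (x, y) -> Math.abs(x.length() - y.length());
        System.out.println(outcast(new String[] { "cat", "dog", "hippopotamus" }, toy));
        // prints hippopotamus: it is farthest from the others under the toy metric
    }
}
```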
Implement an immutable data type Outcast with the following API:
Assume that the argument to outcast() contains only valid WordNet nouns (and that it contains at least two such nouns).
The following test client takes from the command line the name of a synset file, the name of a hypernym file, followed by the names of outcast files, and prints out an outcast in each file:
Here is a sample execution:
Analysis of running time. Analyze the potential effectiveness of your approach to this problem by answering the questions below:
What is the order of growth of the worst-case running time of the length() and ancestor() methods in ShortestCommonAncestor?
What is the order of growth of the best-case running time of the length() and ancestor() methods in ShortestCommonAncestor?
Your answers should be given as a function of the number of vertices V and the number of edges E in the digraph.
Deliverables. Submit WordNet.java, ShortestCommonAncestor.java, and Outcast.java that implement the APIs described above, along with a readme.txt file. Also submit any other supporting files (excluding those in algs4.jar). You may not call any library functions other than those in java.lang, java.util, and algs4.jar.
Explanation / Answer
import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.FileReader;
import java.io.IOException;
import java.util.Hashtable;
import java.util.LinkedList;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;
// Digraph and Queue come from algs4.jar; in newer releases of the library they
// live in the edu.princeton.cs.algs4 package and would need explicit imports.
public class WordNet {
    private TreeMap<String, Set<Integer>> nounToIndx;
    private Hashtable<Integer, Queue<String>> indxToSynset;
    private int bulk = 0;
    private Digraph M;

    public WordNet(String synsets, String hypernyms) {
        if (synsets == null || hypernyms == null)
            throw new NullPointerException("synset/hypernym file name is null");
        // read the synsets from file: build the hashtable and BST
        this.indxToSynset = readFromSynsets(synsets);
        // build the digraph from the hypernym file
        this.M = designGraph(hypernyms);
    }
    private Hashtable<Integer, Queue<String>> readFromSynsets(String docName) {
        TreeMap<String, Set<Integer>> nounToIndx = new TreeMap<String, Set<Integer>>();
        Hashtable<Integer, Queue<String>> indxToSynset = new Hashtable<Integer, Queue<String>>();
        String synsetBreakBy = " ";
        String breakCvsBy = ",";
        BufferedReader buff = null;
        String bar = "";
        int compute = 0;
        try {
            buff = new BufferedReader(new FileReader(docName));
            while ((bar = buff.readLine()) != null) {
                // split on commas: field 0 is the index, field 1 the synset, field 2 the gloss
                String[] c = bar.split(breakCvsBy);
                int indx = Integer.valueOf(c[0]);
                String[] terms = c[1].split(synsetBreakBy);
                // build the BST and hashtable entries for this synset
                Queue<String> synset = new Queue<String>();
                for (int i = 0; i < terms.length; i++) {
                    String s = terms[i];
                    // add each term in the synset to the bag
                    synset.enqueue(s);
                    // if the noun is not yet in the tree, create its index set
                    Set<Integer> identicalterms = nounToIndx.get(s);
                    if (identicalterms != null) {
                        identicalterms.add(indx);
                    }
                    else {
                        Set<Integer> nounToSynset = new TreeSet<Integer>();
                        nounToSynset.add(indx);
                        nounToIndx.put(s, nounToSynset);
                    }
                }
                // store the bag of terms in the hashtable
                indxToSynset.put(indx, synset);
                compute++;
            }
        }
        catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        catch (IOException e) {
            e.printStackTrace();
        }
        finally {
            if (buff != null) {
                try {
                    buff.close();
                }
                catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
        setArea(compute);
        this.nounToIndx = nounToIndx;
        return indxToSynset;
    }
    private Digraph designGraph(String docName) {
        String bar = "";
        BufferedReader buff = null;
        String breakCvsBy = ",";
        // build the digraph over all synset ids
        Digraph DM = new Digraph(this.bulk);
        try {
            buff = new BufferedReader(new FileReader(docName));
            while ((bar = buff.readLine()) != null) {
                // split on commas; edges go from vertx to each vertxToEdgs[i]
                String[] vertxToEdgs = bar.split(breakCvsBy);
                int vertx = Integer.valueOf(vertxToEdgs[0]);
                // add each hypernym edge to the digraph
                for (int i = 1; i < vertxToEdgs.length; i++) {
                    DM.addEdge(vertx, Integer.valueOf(vertxToEdgs[i]));
                }
            }
        }
        catch (FileNotFoundException e) {
            e.printStackTrace();
        }
        catch (IOException e) {
            e.printStackTrace();
        }
        finally {
            if (buff != null) {
                try {
                    buff.close();
                }
                catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
        return DM;
    }

    private void setArea(int bulk) {
        this.bulk = bulk;
    }
    // all WordNet nouns
    public Iterable<String> nouns() {
        LinkedList<String> nouns = new LinkedList<String>();
        // iterate over the BST keys
        for (String a : nounToIndx.keySet()) {
            nouns.add(a);
        }
        return nouns;
    }

    // is the word a WordNet noun?
    public boolean isNoun(String word) {
        if (word == null) throw new NullPointerException("isNoun: null is not an acceptable parameter");
        // logarithmic lookup in the BST
        return nounToIndx.containsKey(word);
    }
    public String sca(String noun1, String noun2) {
        Set<Integer> a = nounToIndx.get(noun1);
        Set<Integer> b = nounToIndx.get(noun2);
        if (a == null) throw new IllegalArgumentException("noun not found: " + noun1);
        if (b == null) throw new IllegalArgumentException("noun not found: " + noun2);
        // note: caching a single ShortestCommonAncestor instance in the constructor
        // would avoid repeating the O(E + V) cycle check on every query
        ShortestCommonAncestor sca = new ShortestCommonAncestor(M);
        int ancIndx = sca.ancestor(a, b);
        Queue<String> ancestrSyns = indxToSynset.get(ancIndx);
        String synset = "";
        for (String c : ancestrSyns) {
            synset = synset + " " + c;
        }
        return synset.trim();
    }

    public int distance(String noun1, String noun2) {
        Set<Integer> a = nounToIndx.get(noun1);
        Set<Integer> b = nounToIndx.get(noun2);
        if (a == null) throw new IllegalArgumentException("noun not found: " + noun1);
        if (b == null) throw new IllegalArgumentException("noun not found: " + noun2);
        ShortestCommonAncestor sca = new ShortestCommonAncestor(M);
        return sca.lngth(a, b);
    }
    public static void main(String[] args) {
        WordNet a = new WordNet("synsets.txt", "hypernyms.txt");
        for (String b : a.nouns()) {
            System.out.println(b);
        }
        System.out.println("Should contain ''hood': " + a.isNoun("'hood"));
        System.out.println("Should contain 'Abramis': " + a.isNoun("Abramis"));
        System.out.println("Should contain '1530s': " + a.isNoun("1530s"));
        System.out.println("Shouldn't contain 'kjhvkvkuyc': " + !a.isNoun("kjhvkvkuyc"));
        WordNet b = new WordNet("synsets_custom.txt", "hypernyms_custom.txt");
        for (String c : b.nouns()) {
            System.out.println(c);
        }
        System.out.println("Should contain 'hello': " + b.isNoun("hello"));
        System.out.println("Should contain 'bye': " + b.isNoun("bye"));
        System.out.println("Shouldn't contain 'kjhvkvkuyc': " + !b.isNoun("kjhvkvkuyc"));
        System.out.println("SCA of 'see_you_later' and 'hi' is 'Greetings', three away: "
                + b.sca("see_you_later", "hi") + ", " + b.distance("see_you_later", "hi"));
        System.out.println("SCA of 'goodbye' and 'hi' is 'Greetings', two away: "
                + b.sca("goodbye", "hi") + ", " + b.distance("goodbye", "hi"));
    }
}
---------------------------------------------------------------------
import java.util.Hashtable;
import java.util.LinkedList;
// Digraph, Queue, Stack, and In come from algs4.jar; in newer releases of the
// library they live in the edu.princeton.cs.algs4 package and would need imports.
public class ShortestCommonAncestor {
    private cycle a;
    protected Digraph M;

    public ShortestCommonAncestor(Digraph M) {
        if (M == null) throw new NullPointerException("ShortestCommonAncestor constructor called with null parameter");
        this.a = detectCycle(M);
        if (a.is_Cyclic) {
            System.out.println("Cycle detected! Back edge at: [" + a.t.getFirst() + ", " + a.t.getLast() + "]");
            throw new IllegalArgumentException("cyclic graph rejected");
        }
        this.M = M;
    }
    public int lngth(Integer v, Integer w) {
        if (v == null || w == null) throw new NullPointerException("lngth called with null parameter: parm1 = " + v + " parm2 = " + w);
        if (v >= M.V() || v < 0) throw new IndexOutOfBoundsException("lngth called with vertex (" + v + ") not in digraph");
        if (w >= M.V() || w < 0) throw new IndexOutOfBoundsException("lngth called with vertex (" + w + ") not in digraph");
        LinkedList<Integer> a = new LinkedList<Integer>();
        LinkedList<Integer> b = new LinkedList<Integer>();
        a.add(v);
        b.add(w);
        return Delux_BFS(a, b).lngth();
    }

    public int ancestor(Integer v, Integer w) {
        if (v == null || w == null) throw new NullPointerException("ancestor called with null parameter: parm1 = " + v + " parm2 = " + w);
        if (v >= M.V() || v < 0) throw new IndexOutOfBoundsException("ancestor called with vertex (" + v + ") not in digraph");
        if (w >= M.V() || w < 0) throw new IndexOutOfBoundsException("ancestor called with vertex (" + w + ") not in digraph");
        LinkedList<Integer> a = new LinkedList<Integer>();
        LinkedList<Integer> b = new LinkedList<Integer>();
        a.add(v);
        b.add(w);
        return Delux_BFS(a, b).ancestor();
    }

    public int lngth(Iterable<Integer> v, Iterable<Integer> w) {
        if (v == null || w == null) throw new NullPointerException("lngth called with null parameter");
        if (!v.iterator().hasNext() || !w.iterator().hasNext()) throw new IllegalArgumentException("lngth called with empty collection");
        return Delux_BFS(v, w).lngth();
    }

    public int ancestor(Iterable<Integer> v, Iterable<Integer> w) {
        if (v == null || w == null) throw new NullPointerException("ancestor called with null parameter");
        if (!v.iterator().hasNext() || !w.iterator().hasNext()) throw new IllegalArgumentException("ancestor called with empty collection");
        return Delux_BFS(v, w).ancestor();
    }
    private infoPath Delux_BFS(Iterable<Integer> v, Iterable<Integer> w) {
        Hashtable<Integer, Point> st_V = new Hashtable<Integer, Point>();
        Hashtable<Integer, Point> st_W = new Hashtable<Integer, Point>();
        Queue<Point> q = new Queue<Point>();
        // seed the queue with every source vertex from both sets
        for (Integer i : v) {
            q.enqueue(new Point(i, null, "v"));
        }
        for (Integer i : w) {
            q.enqueue(new Point(i, null, "w"));
        }
        // interleaved BFS: the first vertex reached from both sides is the SCA
        while (!q.isEmpty()) {
            Point p = q.dequeue();
            int pointKey = p.vlue();
            String pointSet = p.whih_Set();
            if (pointSet.equals("v")) {
                if (st_V.containsKey(pointKey)) {
                    // already visited from this side; skip
                }
                else if (st_W.containsKey(pointKey)) {
                    return processPath(p, st_W);
                }
                else {
                    st_V.put(pointKey, p);
                    for (Integer n : M.adj(pointKey)) q.enqueue(new Point(n, p, pointSet));
                }
            }
            else {
                if (st_W.containsKey(pointKey)) {
                    // already visited from this side; skip
                }
                else if (st_V.containsKey(pointKey)) {
                    return processPath(p, st_V);
                }
                else {
                    st_W.put(pointKey, p);
                    for (Integer n : M.adj(pointKey)) q.enqueue(new Point(n, p, pointSet));
                }
            }
        }
        throw new RuntimeException("Delux_BFS: no path found");
    }
    private infoPath processPath(Point p, Hashtable<Integer, Point> st) {
        // total length of the ancestral path
        int lngth = 0;
        Point st_Point = st.get(p.vlue());
        Trace a = chse_Ptr(st_Point, lngth);
        // trace from the meeting point back along the current side
        Trace b = chse_Ptr(p, a.lngth());
        lngth = b.lngth();
        Queue<Integer> total_Path = new Queue<Integer>();
        for (Integer l : a.path()) {
            total_Path.enqueue(l);
        }
        // drop the shared ancestor so it appears only once in the combined path
        total_Path.dequeue();
        for (Integer k : b.path()) {
            total_Path.enqueue(k);
        }
        return new infoPath(lngth, p.vlue(), a.end(), b.end(), total_Path);
    }
    private Trace chse_Ptr(Point p, int compute) {
        Stack<Integer> path = new Stack<Integer>();
        if (p == null)
            return new Trace(0, null, path);
        Integer end_Path = p.vlue();
        path.push(end_Path);
        // follow the previous-pointers back to a source vertex, counting edges
        while (p.prvious != null) {
            end_Path = p.vlue();
            p = p.prvious;
            path.push(p.vlue());
            compute++;
        }
        return new Trace(compute, end_Path, path);
    }
    // a BFS search node: a vertex, its predecessor, and which source set it came from
    private class Point {
        private Integer vlue;
        private Point prvious;
        private String whih_Set;

        public Point(Integer vlue, Point prvious) {
            this.vlue = vlue;
            this.prvious = prvious;
        }

        public Point(Integer vlue, Point prvious, String whih_Set) {
            this.vlue = vlue;
            this.prvious = prvious;
            this.whih_Set = whih_Set;
        }

        public int vlue() {
            return vlue;
        }

        public Point prvious() {
            return prvious;
        }

        public String whih_Set() {
            return whih_Set;
        }

        public String toString() {
            return "Current: " + vlue;
        }
    }
    // one side of the ancestral path: its length, endpoint, and vertex sequence
    private class Trace {
        private Integer pathlngth;
        private Integer end_Nde;
        private Stack<Integer> path;

        public Trace(Integer pathlngth, Integer end_Nde, Stack<Integer> path) {
            this.pathlngth = pathlngth;
            this.end_Nde = end_Nde;
            this.path = path;
        }

        public Integer lngth() {
            return this.pathlngth;
        }

        public Integer end() {
            return this.end_Nde;
        }

        public Stack<Integer> path() {
            // defensive copy so callers cannot mutate the stored path
            Stack<Integer> to_Retrn = new Stack<Integer>();
            for (Integer a : path) {
                to_Retrn.push(a);
            }
            return to_Retrn;
        }
    }
    // the full result of one SCA query: length, ancestor, endpoints, and path
    private class infoPath {
        private Integer pathlngth;
        private Integer commn_Ancestr;
        private Integer oneEnd;
        private Integer secndEnd;
        private Queue<Integer> path;

        public infoPath(Integer pathlngth, Integer commn_Ancestr, Integer oneEnd, Integer secndEnd, Queue<Integer> path) {
            this.pathlngth = pathlngth;
            this.commn_Ancestr = commn_Ancestr;
            this.oneEnd = oneEnd;
            this.secndEnd = secndEnd;  // was mistakenly assigned oneEnd
            this.path = path;
        }

        public Integer lngth() {
            return this.pathlngth;
        }

        public Integer ancestor() {
            return this.commn_Ancestr;
        }

        public Queue<Integer> path() {
            // defensive copy of the stored path
            Queue<Integer> to_Retrn = new Queue<Integer>();
            for (Integer a : path) {
                to_Retrn.enqueue(a);
            }
            return to_Retrn;
        }

        public Integer oneEnd() {
            return this.oneEnd;
        }

        public Integer secndEnd() {
            return this.secndEnd;
        }
    }
    public enum Status {
        vistng, visted, ntVisted;
    }

    private class cycle {
        public boolean is_Cyclic;
        public LinkedList<Integer> t;

        public cycle(boolean is_Cyclic, LinkedList<Integer> t) {
            this.is_Cyclic = is_Cyclic;
            this.t = t;
        }
    }
    public cycle detectCycle(Digraph M) {
        Status[] s = new Status[M.V()];
        for (int i = 0; i < s.length; i++) {
            s[i] = Status.ntVisted;
        }
        LinkedList<Integer> t = new LinkedList<Integer>();
        for (int i = 0; i < s.length; i++) {
            if (s[i] == Status.visted)
                continue;
            else if (is_Cyclic(i, s, M, t))
                return new cycle(true, t);
        }
        return new cycle(false, t);
    }

    public Iterable<Integer> cycleNodes() {
        return a.t;
    }

    public boolean is_Cyclic() {
        return a.is_Cyclic;
    }

    // DFS with three-state marking: reaching a vertex still being visited means a back edge, hence a cycle
    private boolean is_Cyclic(Integer V, Status[] s, Digraph M, LinkedList<Integer> t) {
        if (s[V] == Status.vistng)
            return true;
        else if (s[V] == Status.visted)
            return false;
        else
            s[V] = Status.vistng;
        for (Integer a : M.adj(V)) {
            if (is_Cyclic(a, s, M, t)) {
                t.add(a);
                return true;
            }
        }
        s[V] = Status.visted;
        return false;
    }
    public static void main(String[] args) {
        In in = new In("digraph1.txt");
        Digraph M = new Digraph(in);
        ShortestCommonAncestor sca = new ShortestCommonAncestor(M);
        int v = 3;
        int w = 9;
        System.out.println(v + ", " + w + ": lngth 3 == " + sca.lngth(v, w) + ", ancestor 1 == " + sca.ancestor(v, w));
        v = 4;
        w = 5;
        System.out.println(v + ", " + w + ": lngth 2 == " + sca.lngth(v, w) + ", ancestor 1 == " + sca.ancestor(v, w));
        v = 4;
        w = 6;
        System.out.println(v + ", " + w + ": lngth 3 == " + sca.lngth(v, w) + ", ancestor 1 == " + sca.ancestor(v, w));
        LinkedList<Integer> a = new LinkedList<Integer>();
        LinkedList<Integer> b = new LinkedList<Integer>();
        a.add(3);
        a.add(4);
        b.add(2);
        b.add(8);
        System.out.println(a + ", " + b + ": lngth 3 == " + sca.lngth(a, b) + ", ancestor 0 or 1 == " + sca.ancestor(a, b));
        LinkedList<Integer> c = new LinkedList<Integer>();
        LinkedList<Integer> d = new LinkedList<Integer>();
        c.add(6);
        d.add(10);
        System.out.println(c + ", " + d + ": lngth 5 == " + sca.lngth(c, d) + ", ancestor 1 == " + sca.ancestor(c, d));
        System.out.println("digraph1.txt - should have no cycles");
        try {
            new ShortestCommonAncestor(M);
        }
        catch (IllegalArgumentException e) {
            System.out.println("cycle rejected!");
        }
        In in1 = new In("digraph1_cycle_1_6.txt");
        Digraph M1 = new Digraph(in1);
        System.out.println("digraph1_cycle_1_6.txt - should have a cycle at [1, 6]");
        try {
            new ShortestCommonAncestor(M1);
        }
        catch (IllegalArgumentException e) { }
        In in2 = new In("digraph1_cycle_5_10.txt");
        Digraph M2 = new Digraph(in2);
        System.out.println("digraph1_cycle_5_10.txt - should have a cycle at [5, 10]");
        try {
            new ShortestCommonAncestor(M2);
        }
        catch (IllegalArgumentException e) { }
        In in3 = new In("digraph1_cycle_5_5.txt");
        Digraph M3 = new Digraph(in3);
        System.out.println("digraph1_cycle_5_5.txt - should have a cycle at [5, 5]");
        try {
            new ShortestCommonAncestor(M3);
        }
        catch (IllegalArgumentException e) { }
    }
}
------------------------------------------------------------------------------------
public class Outcast {
    private WordNet w;

    public Outcast(WordNet wordnet) {
        this.w = wordnet;
    }
    public String outcast(String[] nouns) {
        int n = nouns.length;
        int dstnce = 0;
        int longstDstnce = dstnceToAll(nouns[0], nouns, 0);
        String outcast = nouns[0];
        for (int i = 1; i < n; i++) {
            dstnce = dstnceToAll(nouns[i], nouns, i);
            if (dstnce > longstDstnce) {
                longstDstnce = dstnce;
                outcast = nouns[i];
            }
        }
        return outcast;
    }

    // sum of distances from noun a to every other noun in b
    private Integer dstnceToAll(String a, String[] b, Integer indexAinB) {
        int acumltDstnce = 0;
        for (int j = 0; j < b.length; j++) {
            if (j != indexAinB) acumltDstnce += w.distance(a, b[j]);
        }
        return acumltDstnce;
    }
    public static void main(String[] args) {
        WordNet a = new WordNet("synsets.txt", "hypernyms.txt");
        Outcast b = new Outcast(a);
        String[] outcast_5 = { "horse", "zebra", "cat", "bear", "table" };
        System.out.println(b.outcast(outcast_5));
        String[] outcast_8 = { "water", "soda", "bed", "orange_juice", "milk", "apple_juice", "tea", "coffee" };
        System.out.println(b.outcast(outcast_8));
        String[] outcast_11 = { "apple", "pear", "peach", "banana", "lime", "lemon", "blueberry", "strawberry", "mango", "watermelon", "potato" };
        System.out.println(b.outcast(outcast_11));
    }
}