Unable to inherit from Hadoop Mapper (new or old API) in 2.9.0 with code that functions in 2.8.1 #4603
Imported From: https://issues.scala-lang.org/browse/SI-4603?orig=1

@lrytz said: This is induced by the use of a type constructor as a bound. Much simplified:

```java
// J.java
public class J<T> {
    public static void f(java.lang.Class<? extends J> cls) { }
    // correctly it should be like this, and then it would work.
    // unfortunately that doesn't mean we don't have to deal with it.
    // public static void f(java.lang.Class<? extends J<?>> cls) { }
}
```

```scala
// S.scala
class S extends J[AnyRef]

object Test {
  def main(args: Array[String]) {
    J.f(classOf[S])
  }
}
```

This results in:

```
S.scala:5: error: type mismatch;
 found   : java.lang.Class[S](classOf[S])
 required: java.lang.Class[_ <: J]
    J.f(classOf[S])
        ^
one error found
```

I don't immediately see how to work around it from the Scala side alone, because Scala is picky about how we express these things.

```scala
// nope
J.f(classOf[S]: Class[_ <: J[_]])
```

```
S.scala:5: error: type mismatch;
 found   : Class[_$1(in method main)] where type _$1(in method main) <: J[_]
 required: java.lang.Class[_ <: J]
    J.f(classOf[S]: Class[_ <: J[_]])
        ^
one error found
```

```scala
// nope
J.f(classOf[S]: Class[J[_]])
```

```
S.scala:5: error: type mismatch;
 found   : java.lang.Class[S](classOf[S])
 required: Class[J[_]]
Note: S <: J[_] (and java.lang.Class[S](classOf[S]) <: java.lang.Class[S]), but Java-defined class Class is invariant in type T.
You may wish to investigate a wildcard type such as `_ <: J[_]`. (SLS 3.2.10)
    J.f(classOf[S]: Class[J[_]])
        ^
one error found
```

```scala
// naturally, nope
J.f(classOf[S]: Class[J])
```

```
S.scala:5: error: class J takes type parameters
    J.f(classOf[S]: Class[J])
                    ^
one error found
```
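The point in the commented-out Java line above, that the fully parameterized signature `Class<? extends J<?>>` would work, can be checked from the Scala side in isolation. A minimal self-contained sketch with illustrative stand-in classes (no Java or Hadoop involved; all names are made up for this example):

```scala
// Stand-ins mirroring J and S from the comment above; names are illustrative.
class Box[T]
class StringBox extends Box[String]

object Sketch {
  // Scala analogue of the correctly parameterized bound Class<? extends J<?>>:
  // with the wildcard applied, classOf[StringBox] conforms directly, because
  // Class[StringBox] <: Class[_ <: Box[_]].
  def takesWildcard(c: Class[_ <: Box[_]]): String = c.getSimpleName

  def main(args: Array[String]): Unit =
    println(takesWildcard(classOf[StringBox])) // prints "StringBox"
}
```

Only the raw bound `Class[_ <: J]`, which Scala cannot even write down, causes the mismatch; the wildcard-applied form is unproblematic.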
@paulp said: commit d832118727bc6ffc1b87519086bae47e06090bb6
@leifwickland said:

```
project.scratch=true
project.name=test
sbt.version=0.7.7
project.version=1.0
build.scala.versions=2.9.0-1
project.initialize=false
```

The build/Project.scala looked like:

```scala
import sbt._

class Project(info: ProjectInfo) extends DefaultProject(info) {
  val clouderaRepo = "cloudera release" at "https://repository.cloudera.com/content/repositories/releases"
  val cdhVer = "0.20.2-cdh3u0"
  val hadoopCore = "org.apache.hadoop" % "hadoop-core" % cdhVer % "provided"
}
```

The project contained exactly one source file:

```scala
import org.apache.hadoop._
import org.apache.hadoop.io._
import org.apache.hadoop.conf._
import org.apache.hadoop.mapreduce._

object MyJob {
  def main(args: Array[String]) {
    val job = new Job(new Configuration())
    job.setMapperClass(classOf[MyMapper])
  }
}

class MyMapper extends Mapper[LongWritable, Text, Text, Text] {
  override def map(key: LongWritable, value: Text, context: Mapper[LongWritable, Text, Text, Text]#Context) {
  }
}
```

Compiling resulted in the following error message:

```
[error] ....MyJob.scala:9: type mismatch;
[error]  found   : java.lang.Class[MyMapper](classOf[MyMapper])
[error]  required: java.lang.Class[_ <: org.apache.hadoop.mapreduce.Mapper]
[error] job.setMapperClass(classOf[MyMapper])
[error]                    ^
[error] one error found
```
@leifwickland said (edited on Jun 5, 2011 6:37:52 PM UTC): I should've mentioned explicitly that I tested with Cloudera's CDH3, which is based on Hadoop 0.20.2.
@ijuma said: So, you're right that this is not actually fixed. It's just that you can work around it by making the mapper an inner class of the object. Coincidentally, I made that change before I tested it with 2.9.0-1, and that made me think it had been fixed in 2.9.0-1.
@leifwickland said (edited on Jun 6, 2011 9:31:44 PM UTC): Thanks for taking another look at this. I had actually tried the inner class trick before I posted and it failed for me. I discovered experimentally that that's because I put my inner class after the main(). If the inner class MyMapper is declared before the main, it works fine. It turns out, having MyMapper as a top level class declared in the same file and before MyJob also works around the problem.
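Besides the declaration-order trick described above, an unchecked cast is a workaround often reached for when a Java API exposes raw-typed bounds. Whether it helps against this particular compiler bug depends on the version, so this is only a sketch of the idiom, using hypothetical stand-in classes instead of Hadoop:

```scala
// Hypothetical stand-ins; no Hadoop on the classpath.
class Widget[T]
class TextWidget extends Widget[String]

object CastWorkaround {
  // Stand-in for a setter whose Java-side bound is problematic.
  def register(c: Class[_ <: Widget[_]]): String = "registered " + c.getSimpleName

  def main(args: Array[String]): Unit = {
    // If the compiler rejected classOf[TextWidget] at the call site, an
    // unchecked cast would sidestep the conformance check. It is safe here
    // because TextWidget really does extend Widget.
    val cls = classOf[TextWidget].asInstanceOf[Class[_ <: Widget[_]]]
    println(register(cls)) // prints "registered TextWidget"
  }
}
```

The cast trades a compile-time check for the programmer's own guarantee that the class argument really is a subclass of the bound.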
@leifwickland said (edited on Jun 20, 2011 7:30:11 PM UTC):

```
[error]  found   : java.lang.Class[MyReducer](classOf[MyReducer])
[error]  required: java.lang.Class[_ <: org.apache.hadoop.mapreduce.Reducer]
[error] job.setCombinerClass(classOf[MyReducer])
```
Thomas Themel (themel) said:

```scala
package something.weird

import javax.xml.bind.annotation.adapters.{XmlAdapter, XmlJavaTypeAdapter}
import scala.reflect.BeanInfo

@BeanInfo class Intermediate {
  var foo: String = _
  var bar: String = _
}

@XmlJavaTypeAdapter(classOf[Adapter]) class Original(val foo: String, val bar: String) {
}

class Adapter extends XmlAdapter[Intermediate, Original] {
  def marshal(o: Original): Intermediate = null
  def unmarshal(i: Intermediate): Original = null
}
```

Compiles in 2.8.1, fails in 2.9.0 with:

```
[error] jaxb.scala:11: type mismatch;
[error]  found   : java.lang.Class[something.weird.Adapter](classOf[something.weird.Adapter])
[error]  required: java.lang.Class[_ <: javax.xml.bind.annotation.adapters.XmlAdapter]
[error] @XmlJavaTypeAdapter(classOf[Adapter]) class Original(val foo: String, val bar: String) {
```
Federico Ragona (federico.ragona-at-gmail.com) said (edited on Nov 12, 2013 8:05:18 PM UTC):

Code

Here is my project layout:

```
src
  main
    java
      myhadoop
        AMapper.java
    scala
      myhadoop
        MyMapper.scala
```

And these are my classes' sources:

```java
// AMapper.java
package myhadoop;

import org.apache.hadoop.io.BytesWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class AMapper extends Mapper<Text, BytesWritable, Text, Text> {
    public void handleThis(Text key, Mapper<Text, BytesWritable, Text, Text>.Context context, Text text) {
        // not relevant to this test
    }
}
```

```scala
// MyMapper.scala
package myhadoop

import org.apache.hadoop.io.{BytesWritable, Text}
import org.apache.hadoop.mapreduce.Mapper

class MyMapper extends AMapper {
  override def map(key: Text, value: BytesWritable, context: Mapper[Text, BytesWritable, Text, Text]#Context) {
    handleThis(key, context, new Text(value.getBytes))
  }
}
```

Error

Compiling this code will raise the following error, pointing to the

Environment

Scala 2.10.3
Federico Ragona (federico.ragona-at-gmail.com) said: I hope this helps you.
=== What steps will reproduce the problem (please be specific and use wikiformatting)? ===
The following code works in 2.8.1:
=== What do you see instead? ===
I get this compiler error:
=== Additional information ===
Daniel Sobral very helpfully directed me here from my Stack Overflow question at http://stackoverflow.com/questions/6028221/how-does-one-implement-a-hadoop-mapper-in-scala-2-9-0.
The method definition of the failing call (Job.setMapperClass) is this:

The method definition of Mapper itself is this:

```java
public class Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
```
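The invariance note that appears in the error messages above (SLS 3.2.10) can be reproduced without Hadoop. A minimal sketch with illustrative class names:

```scala
// Illustrative stand-ins for a parameterized class and a concrete subclass.
class Holder[T]
class IntHolder extends Holder[Int]

object Invariance {
  def main(args: Array[String]): Unit = {
    // Class is invariant in its type parameter, so this would not compile:
    // val bad: Class[Holder[Int]] = classOf[IntHolder]
    // The wildcard form suggested by the error message does conform:
    val ok: Class[_ <: Holder[Int]] = classOf[IntHolder]
    println(ok.getSimpleName) // prints "IntHolder"
  }
}
```

This is why the compiler's suggestion is a wildcard upper bound rather than the applied type itself; the bug here is only that the raw bound coming from Java cannot be expressed that way in Scala.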
=== What versions of the following are you using? ===